Hey LocalLLaMA community,
About 6 months ago I shared a closed-source developer tool I built. It actually got a decent amount of traction, but internal adoption suffered because no established company was willing to pipe terabytes of data into some small startup's cloud. Since then I've spoken with the community and decided to open-source the tool. I'm currently building in the open, and when I have a functioning tool (hopefully next month) I want to have some killer tutorials ready to share. I'd love to hear what tutorial ideas or use cases you can come up with. Here's a quick rundown of the tool.
Burla is the simplest cluster compute software: it lets users scale their code across thousands of cloud machines, with zero setup, using a single line of code. Here is the high-level feature set:
- Free and open-source software
- Installable in your cloud with one command
- A Python package with one function and two arguments
- Scales to thousands of VMs in under 1 second
- Deploys code to any hardware (GPUs) and any software environment (Docker)
- A tool that just works: Burla automatically syncs packages, re-raises exceptions, and streams back stdout/stderr
It's important to note that right now we don't support inter-node communication, so I'm looking for embarrassingly parallel problems. Think ensemble training, tokenization, batch inference, and data compression.
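To make the "one function, two arguments" shape concrete, here's a rough local sketch of the pattern. The `local_parallel_map` helper below is hypothetical (it just uses threads on one machine as a stand-in); it's only meant to mirror the map-a-function-over-inputs API the post describes, not Burla's actual implementation or import path.

```python
from concurrent.futures import ThreadPoolExecutor

def tokenize(doc):
    """Placeholder work function: split one document into lowercase tokens."""
    return doc.lower().split()

def local_parallel_map(func, inputs):
    """Hypothetical local stand-in for the one-function, two-argument API.

    With Burla, the same call shape would fan `func` out across cloud VMs
    instead of local threads; this version just illustrates the interface.
    """
    with ThreadPoolExecutor() as pool:
        return list(pool.map(func, inputs))

docs = ["Hello World", "Burla scales Python", "embarrassingly parallel work"]
tokens = local_parallel_map(tokenize, docs)
print(tokens)  # → [['hello', 'world'], ['burla', 'scales', 'python'], ...]
```

Any embarrassingly parallel job (batch inference, compression, tokenization like above) fits this shape: one pure function, one list of inputs, no cross-worker communication.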
I look forward to hearing from the community and hopefully I can build out some tutorials that solve real issues.
💬 Discussion on r/LocalLLaMA (3 points, 2 comments)