Your local AI just leveled up to multiplayer. Parallax is the easiest way to build your own AI cluster to run the best large language models across devices, no matter their specs or location.
Everyone loves free, private LLMs. But today, they’re still not as scalable or easy to use as they should be.
We’ve always felt that local AI should be as powerful as it is personal, which is why we built Parallax.
Parallax started from a simple question: what if your laptop could host more than just a small model? What if you could tap into other devices — friends, teammates, your other machines — and run something much bigger, together?
We made that possible. Parallax is the first framework to serve models in a fully distributed way across devices, regardless of hardware or location.
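Conceptually, the core move is layer sharding: split a model's layers across whatever devices join the cluster, in proportion to the memory each one brings. Here's a rough Python sketch of that idea (illustrative names only, not our actual API):

```python
# Rough illustration of layer sharding ("pipeline parallelism") across
# peer devices. All names are hypothetical -- this is the general
# technique, not Parallax's actual API.
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    vram_gb: float  # memory this device contributes to the pool

def shard_layers(num_layers: int, peers: list[Peer]) -> dict[str, range]:
    """Assign each peer a contiguous slice of layers, sized by its memory."""
    total = sum(p.vram_gb for p in peers)
    plan, start = {}, 0
    for i, p in enumerate(peers):
        # The last peer takes the remainder so every layer is covered.
        if i == len(peers) - 1:
            count = num_layers - start
        else:
            count = round(num_layers * p.vram_gb / total)
        plan[p.name] = range(start, start + count)
        start += count
    return plan

# A 48-layer model spread over a laptop, a desktop, and a friend's GPU box:
pool = [Peer("laptop", 8), Peer("desktop", 16), Peer("friend-gpu", 24)]
print(shard_layers(48, pool))
# {'laptop': range(0, 8), 'desktop': range(8, 24), 'friend-gpu': range(24, 48)}
```

(In practice the hard part isn't the split itself; it's keeping the token handoff between peers fast.)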
No one will ever be GPU-poor again!
In benchmarks, Parallax already surpasses other popular local AI projects and frameworks, and this is just the beginning. We’re working on LLM inference optimization techniques and deeper system-level improvements to make local AI faster, smoother, and so natural it feels almost invisible.
Parallax is completely free to use, and we’d love for you to try it and build with us!
Really exciting idea — turning idle devices into a distributed inference cluster feels practical and privacy-friendly.
Quick question: how do you handle latency and bandwidth variability across WAN peers to keep inference smooth for real-time apps? Would love clarity on any built-in QoS or fallback strategies.
Love the concept of hosting LLMs across devices with shared GPU! As a UI/UX designer who's worked with 200+ products, I'm curious: how did you design the coordination experience between different devices? Making distributed computing feel seamless for developers is such a fascinating UX challenge. Congrats on the launch!
Parallax makes it easy to build your own AI cluster and run top-tier LLMs across any device, regardless of specs or location. Scalable intelligence, now in your hands.
Wow! This is the project I was "hunting" for so long! Thank you Gradient for building this and open sourcing it. Here comes the wave of innovators. I see so many possibilities - are you partnering with HuggingFace, Apple/MLX, Ollama...? Would love to hear about your roadmap and hopefully contribute to this lovely project.
This is cool! Is it also possible to connect the GPU to the public network so that it can be accessed remotely from different locations?
This is a fantastic concept.
It immediately sparked an idea: What if you added an incentive layer or a token economy on top of this?
Users could contribute their idle hardware (GPUs) to the global network and earn tokens based on the compute power they provide. These tokens could then be used by other users to "pay" for their inference tasks on the network.
This would turn Parallax from a collaborative tool for trusted peers (friends, teammates) into a fully decentralized, global marketplace for AI compute. It seems like the logical next step to truly ensure "no one will ever be GPU-poor again!"
Great work by the Gradient team!
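To make that concrete, here's a toy sketch of the kind of credit ledger such a marketplace would imply (purely hypothetical; nothing like this exists in Parallax today):

```python
# Toy credit ledger for a hypothetical compute marketplace.
# Nothing like this exists in Parallax today -- it's just the
# contribute-to-earn / spend-to-infer loop described above.

class ComputeLedger:
    def __init__(self) -> None:
        self.balances: dict[str, float] = {}

    def credit_contribution(self, user: str, gpu_seconds: float, rate: float = 1.0) -> None:
        """Providers earn tokens in proportion to compute contributed."""
        self.balances[user] = self.balances.get(user, 0.0) + gpu_seconds * rate

    def debit_inference(self, user: str, gpu_seconds: float, rate: float = 1.0) -> None:
        """Consumers spend tokens to run inference jobs on the network."""
        cost = gpu_seconds * rate
        if self.balances.get(user, 0.0) < cost:
            raise ValueError(f"{user} lacks credit for this job")
        self.balances[user] -= cost

ledger = ComputeLedger()
ledger.credit_contribution("alice", gpu_seconds=3600)  # share a GPU for an hour...
ledger.debit_inference("alice", gpu_seconds=120)       # ...then spend credit on a job
print(ledger.balances)  # {'alice': 3480.0}
```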
Is Parallax designed mainly for LLM inference, or could it also support other workloads like diffusion models or RAG pipelines?
Big fan of open models. Do you plan to open source the orchestration logic as well?
How do you handle latency when devices are spread across different networks?
This is such a smart direction: distributed local AI could really change how people think about compute access.
Have you thought about adding a way for users to share or rent out idle GPU power within a trusted network?
Congrats on shipping this!
The messaging is sharp; I mean, ‘AI goes brrr’ has substance behind it. Love it!
Distributed compute made simple is a killer value prop if you nail onboarding and clarity around use cases.
This is a great concept, which enables “micro sovereignty” (let’s put it this way 😉) for individuals or even corporates.
Think of leveraging idle GPUs all over the world… A new market for AI compute capacity? 🤔
Great stuff, team 👏🏻
Really love what you’re building here; Parallax tackles distributed inference beautifully. We’re working on GraphBit, which focuses on the orchestration layer: making AI agents run reliably and concurrently once those models are deployed. Feels like what you’re building (how models run) and what we’re solving (how intelligence executes) could complement each other perfectly. Would love to explore possible collaboration! My email: [email protected]
Pooling devices for distributed LLMs is terrific... my laptop always struggles solo :( Does Parallax handle dynamic connections if friends drop in and out mid-session? Would love to try it out asap!
Looks pretty great! I wonder if it's possible to add more models, as it seems like there are only two available right now.
Is there any latency overhead when coordinating inference across heterogeneous hardware?