Run LLMs on your laptop, entirely offline • Chat with your local documents • Use models through the in-app Chat UI or an OpenAI-compatible local server
Want to get on the DeepSeek hype train but don't want your data to be sent to China? Cool!
You can run DeepSeek R1 models locally with LM Studio if you have enough RAM.
Here's how to do it:
1. Download LM Studio for your operating system from here.
2. Click the search icon in the sidebar and search for "DeepSeek"
3. Pick an option that will fit on your system. For example, if you have 16GB of RAM, you can run the 7B or 8B parameter distilled models. If you have ~192GB+ of RAM, you can run the full 671B parameter model (see the rough sizing sketch after these steps).
4. Load the model in the chat, and start asking questions!
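To make step 3 concrete, here is a back-of-the-envelope sketch (my own rule of thumb, not LM Studio's actual sizing logic) of how parameter count and quantization translate into RAM. The bit widths and overhead factor are illustrative assumptions:

```python
# Back-of-the-envelope RAM estimate for running a quantized LLM locally.
# Illustrative assumptions only (not how LM Studio sizes models): weights
# quantized to a given number of bits, plus ~20% overhead for the KV cache,
# runtime buffers, and the OS.

def estimated_ram_gb(params_billions: float,
                     bits_per_weight: float = 4.0,
                     overhead: float = 1.2) -> float:
    """Rough RAM needed in GB: parameters * bytes-per-weight * overhead."""
    bytes_per_weight = bits_per_weight / 8  # e.g. 4-bit -> 0.5 bytes per weight
    return params_billions * bytes_per_weight * overhead

for size_b in (7, 8, 70, 671):
    print(f"{size_b}B parameters -> ~{estimated_ram_gb(size_b):.0f} GB at 4-bit")

# The 7B/8B distills come out around 4-5 GB, which fits comfortably in 16 GB of RAM.
# The full 671B model lands in the hundreds of GB at 4-bit; more aggressive ~2-bit
# quantizations bring it closer to the ~192 GB figure mentioned above.
```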
Of course, you can also run other models locally using LM Studio, like Llama 3.2, Mistral, Phi, Gemma, DeepSeek, and Qwen 2.5.
Congrats @chrismessina @yagilb!
I'm Daniel, founder of Digital Products; you can find us on Product Hunt as well.
We help connect consumers to digital brands in 2 ways:
1. Consumers - reviews are short-format only, so it's quick and easy for them to share what they think about your product.
2. Digital brands - reply to consumer reviews and engage directly with your customers.
Getting listed on Digital Products gives you exposure to a new audience and increases your chances of winning new users for LM Studio.
Would you like to partner with us?
Hey Product Hunt, and thanks @chrismessina for the hunt!
LM Studio is a desktop app for Mac / Windows / Linux that makes it super easy to run LLMs on your computer (offline).
You can search and download models from within the app, and then load them in a ChatGPT-like interface and chat away w/o any data privacy concerns, since it's all local.
We support RAG with PDFs, and for some special models ("VLMs") you can provide image input as well.
Can you run DeepSeek R1 (distilled / full)? The answer is yes. We shared a quick blog post about it yesterday: https://lmstudio.ai/blog/deepsee...
If you're a developer, LM Studio comes with a built-in REST API server that listens on /v1/chat/completions, /v1/embeddings (OpenAI compatibility endpoints). This means you can just switch up the "base url" in your OpenAI client code and hit a local model instead. That's how apps like Zed, Continue, Cline, and many more use local models served from LM Studio. More in the docs: https://lmstudio.ai/docs
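As a minimal sketch of that base-URL swap, here is what it looks like with the official openai Python client. The port (1234) is LM Studio's default local server address, the API key is an arbitrary placeholder since the local server doesn't check it, and the model identifier below is hypothetical; use whatever name appears for the model you've loaded:

```python
# Minimal sketch: talk to a local model served by LM Studio through its
# OpenAI-compatible endpoints. Assumes the local server is running on the
# default port 1234 and that a model is already loaded in the app.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:1234/v1",  # point the client at LM Studio instead of api.openai.com
    api_key="lm-studio",                  # any non-empty string; the local server doesn't validate it
)

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # hypothetical identifier; use the name shown in LM Studio
    messages=[{"role": "user", "content": "Explain what a distilled model is in one paragraph."}],
)
print(response.choices[0].message.content)
```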
Extra technical details: we have two underlying LLM inference engines. One is the venerable llama.cpp, and the other is MLX from Apple. If you're on a Mac, give MLX a try: it's blazing fast.
Let us know if you have any feedback, and join our discord too! (link on lmstudio.ai).
Cheers.
Yagil
Nice work, folks! Just curious, what is the main difference between using LM Studio and running things locally with Ollama, for example? Is it the offline part, or am I missing something else?
LM Studio looks like a fantastic tool! Running large language models entirely offline on my laptop is a game-changer. Excited to see how it enhances local AI development. Congrats on the launch! If you have free time, please check out our app; we've built an on-device LLM :)