Discover, download, and run local LLMs (incl. DeepSeek R1)
🤖 • Run LLMs on your laptop, entirely offline
📚 • Chat with your local documents
👾 • Use models through the in-app Chat UI or an OpenAI-compatible local server
Want to get on the DeepSeek hype train without sending your data to China?
You can run DeepSeek R1 models locally with LM Studio if you have enough RAM.
Here's how to do it:
1. Download LM Studio for your operating system from [lmstudio.ai](https://lmstudio.ai).
2. Click the 🔎 icon on the sidebar and search for "DeepSeek".
3. Pick an option that will fit on your system. For example, if you have 16GB of RAM, you can run the 7B or 8B parameter distilled models; if you have ~192GB+ of RAM, you can run the full 671B parameter model (a rough way to estimate the fit is sketched after this list).
4. Load the model in the chat, and start asking questions!
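How much memory a model needs is driven mostly by its parameter count and quantization level. Here's a back-of-envelope sketch; the overhead factor for the KV cache and runtime buffers is an assumption, and real usage varies with quant format and context length:

```python
def approx_ram_gb(params_billion: float, bits_per_weight: float, overhead: float = 1.1) -> float:
    # Weights dominate: params * bits / 8 bits-per-byte gives GB
    # (1B params at 8 bits is ~1 GB). `overhead` loosely covers the
    # KV cache and runtime buffers -- an assumption, not a spec.
    return params_billion * bits_per_weight / 8 * overhead

print(f"7B distill @ 4-bit: ~{approx_ram_gb(7, 4):.1f} GB")   # comfortably fits in 16GB
print(f"671B full @ 2-bit: ~{approx_ram_gb(671, 2):.0f} GB")  # why the full model wants ~192GB+
```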
Of course, you can also run other models locally using LM Studio, like Llama 3.2, Mistral, Phi, Gemma, DeepSeek, and Qwen 2.5.
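Prefer code over the chat UI? LM Studio also serves any loaded model through an OpenAI-compatible local endpoint (http://localhost:1234/v1 by default). Here's a minimal sketch using the official `openai` Python client; the model identifier is illustrative, so substitute the id shown in your LM Studio model list:

```python
from openai import OpenAI

# Point the client at LM Studio's local server instead of api.openai.com.
# No real key is needed locally, but the client expects a non-empty string.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="deepseek-r1-distill-qwen-7b",  # illustrative id; copy yours from LM Studio
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response.choices[0].message.content)
```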