Apollo AI

Run local models like Llama on iOS

iOS
Developer Tools
Artificial Intelligence

Apollo AI is an app for running local models privately on your iOS device. Once downloaded, these models can be used offline with no internet connection at all. Try Llama 3.1, Qwen, Deepseek r1 Distills, and more.

Top comment

Hey everyone, Apollo is something I built to test local models on my iOS device. It can also:

1. Connect to local or custom AI servers that use the OpenAI-compatible API standard (see the sketch below)
2. Connect to OpenRouter, if you log in

There's no tracking or anything other than crash reporting, and local models like Llama 3.2 or Qwen work fully offline.
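For readers unfamiliar with what "OpenAI-compatible" means in practice: any server that speaks this standard accepts the same chat-completions request shape, so one client can talk to local servers, OpenRouter, and others. Here is a minimal Swift sketch of such a request; the base URL, port, API key, and model name are placeholders, not Apollo's actual configuration:

```swift
import Foundation

// A minimal OpenAI-compatible chat-completions request.
struct ChatMessage: Codable {
    let role: String
    let content: String
}

struct ChatRequest: Codable {
    let model: String
    let messages: [ChatMessage]
}

func sendChat() async throws {
    // Any OpenAI-compatible server exposes this path; host and port
    // are placeholders for a hypothetical local server.
    let url = URL(string: "http://localhost:8080/v1/chat/completions")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    // OpenRouter and many self-hosted servers expect a bearer token here.
    request.setValue("Bearer YOUR_API_KEY", forHTTPHeaderField: "Authorization")

    let body = ChatRequest(
        model: "llama-3.2-3b",  // placeholder model identifier
        messages: [ChatMessage(role: "user", content: "Hello!")]
    )
    request.httpBody = try JSONEncoder().encode(body)

    let (data, _) = try await URLSession.shared.data(for: request)
    print(String(data: data, encoding: .utf8) ?? "")
}
```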

Comment highlights

@aaronykng Grats on the launch!

I recommend Apollo AI to anyone looking for a way to use powerful AI models right on their iOS device! The app gives you complete freedom to work offline, without a constant internet connection, which is great for traveling or working with limited network access.

@aaronykng The concept is promising, particularly with its OpenRouter integration, but the application suffers from several significant issues. A critical flaw is in the voice chat functionality: the app continues to process audio input while delivering AI responses, creating an endless feedback loop where the AI responds to its own output. This effectively makes the voice chat unusable. Additionally, I frequently encounter "provider errors" without clear explanations. Despite these challenges, @Apollo AI shows potential, and I really hope there will be future updates and continued development support.
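One common way to break this kind of feedback loop (not necessarily how Apollo handles or should handle it) is to suspend microphone capture while the response is being spoken. A simplified Swift sketch using AVSpeechSynthesizer delegate callbacks; audio session configuration and the actual recognizer are omitted:

```swift
import AVFoundation

// Sketch: mute the mic during playback so the recognizer never hears
// the app's own output, then resume listening afterwards.
final class VoiceLoopGuard: NSObject, AVSpeechSynthesizerDelegate {
    private let engine = AVAudioEngine()
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func startListening() throws {
        let input = engine.inputNode
        input.installTap(onBus: 0, bufferSize: 1024,
                         format: input.outputFormat(forBus: 0)) { buffer, _ in
            // Feed `buffer` to speech recognition (omitted).
        }
        try engine.start()
    }

    func speak(_ text: String) {
        synthesizer.speak(AVSpeechUtterance(string: text))
    }

    // Stop capturing as soon as playback starts...
    func speechSynthesizer(_ s: AVSpeechSynthesizer,
                           didStart utterance: AVSpeechUtterance) {
        engine.inputNode.removeTap(onBus: 0)
    }

    // ...and reinstall the tap once the utterance finishes.
    func speechSynthesizer(_ s: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        try? startListening()
    }
}
```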

How does this compare with @fullmoon?

Congrats on the launch @aaronykng! It's fun to see another local model launch today: https://www.producthunt.com/post.... How does Apollo AI compare to Llamao?

@aaronykng This is a really awesome app! 👏 Impressive onboarding UX and beautiful overall app design. A few questions + suggestions:

1. When it prompts me for a local model, what is the "recommended model"? It's hard to understand the tradeoffs. It might be helpful for less technical users to select a model by default (the "best" one), and then have an advanced section to customize it for their particular use case, highlighting the tradeoffs.
2. It might be helpful to show the user how much storage they have left on their device, and only list models that will actually fit (a sketch of this follows the list).
3. Some models have a download link, but others don't. For example, it won't let me download deepseek-r1-distill-llama-8b-8bit-mix, even though I have the storage space. Or is the iPhone 15 Pro just not capable of running that model?
4. I imagine this will chew through the battery pretty hard. Do you have any numbers you can share?
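On point 2, iOS does expose available capacity through URL resource values, so filtering the model list by free space is straightforward. A minimal Swift sketch; the model names and sizes are placeholders, not Apollo's actual catalog:

```swift
import Foundation

// Placeholder catalog: model identifiers and approximate download sizes.
let models: [(name: String, bytes: Int64)] = [
    ("llama-3.2-1b", 1_200_000_000),
    ("qwen-2.5-7b", 4_700_000_000),
]

func modelsThatFit() throws -> [String] {
    let home = URL(fileURLWithPath: NSHomeDirectory())
    // "Important usage" capacity includes space the system can reclaim
    // (e.g. purgeable caches), which suits large, user-requested downloads.
    let values = try home.resourceValues(
        forKeys: [.volumeAvailableCapacityForImportantUsageKey])
    let free = values.volumeAvailableCapacityForImportantUsage ?? 0
    return models.filter { $0.bytes < free }.map { $0.name }
}
```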

Congrats @aaronykng on the launch, and good luck with it! Curious how performance is optimized for local models on iOS? Thanks

Great job! The offline functionality and privacy-first approach, especially with models like Llama 3.2 or Qwen, make it highly practical for developers.