Local AI coding assistant for your terminal using Ollama
Billy is a terminal-native AI coding assistant that runs entirely on your machine with Ollama. Unlike cloud copilots, it keeps your code private, works offline, persists context and chat history across sessions, and can suggest or run shell commands with your approval. A bundled build ships with Ollama included, so setup is fast and local from day one. A free tier is available, with a one-time Pro upgrade starting at $19.
I built Billy because I wanted a coding assistant that felt like Copilot CLI, but without a monthly bill and without sending my code to the cloud.
Billy runs locally with Ollama, lives in the terminal, and is built for developers who want a private, fast, low-friction workflow. The core idea is simple: local AI should be good enough to use every day, and it should be easy to own instead of rent.
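To make the local-first claim concrete: Ollama exposes an HTTP API on `localhost:11434` by default, so a tool built on it sends prompts only to a process on your own machine. Below is a minimal sketch of what such a request looks like; the model name is just an example of a locally pulled model, and the actual send is commented out since it requires a running Ollama instance.

```python
import json

# Sketch of a local-first request. Ollama serves its API on
# localhost:11434 by default, so nothing leaves the machine.
# "qwen2.5-coder" is an example; use any model you've pulled locally.
payload = {
    "model": "qwen2.5-coder",
    "prompt": "Explain what this shell command does: tar -xzf app.tar.gz",
    "stream": False,
}
body = json.dumps(payload)

# To actually send it (requires Ollama running locally):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body.encode(),
#     headers={"Content-Type": "application/json"},
# )
# print(urllib.request.urlopen(req).read().decode())
print(body)
```

Because everything talks to a localhost endpoint, unplugging from the network doesn't break the workflow, which is the core of the offline pitch.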
What makes Billy different right now:
- It runs entirely on your machine
- No API keys, no cloud dependency, no data leaving your computer
- Terminal-native chat and one-shot commands
- Agentic mode that suggests and runs shell commands with your approval
- Memory, session history, and model switching built in
- Optional full build that bundles Ollama for easier setup

It's still pre-alpha, so I'd value honest feedback more than anything else, especially on:
- onboarding and install flow
- whether the local-first pitch is compelling
- which features matter most before a wider launch
If you try it, let me know where it feels useful, where it feels rough, and what would make you switch from your current setup.