
Kuzco

Open-source Swift package to run LLMs locally on iOS & macOS

Kuzco is a Swift package for integrating large language models (LLMs) directly into iOS, macOS, and Mac Catalyst apps. Built on `llama.cpp`, it offers customizable prompts, flexible tuning, and async/await-friendly APIs for on-device AI.
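As a hedged illustration of the async/await-friendly, llama.cpp-backed workflow described above: every identifier below (`Kuzco` as the module name, `LlamaInstance`, `respond(to:)`) is an assumption for sketch purposes, not confirmed Kuzco API — consult the package's README for the real types. The general shape such wrappers expose is: point at a local GGUF model file, load it, then stream tokens through an `AsyncSequence`.

```swift
import Foundation
import Kuzco  // assumed module name

// Hedged sketch, not Kuzco's documented API. Pattern shown:
// load a llama.cpp-compatible (GGUF) model bundled with the app,
// then stream a completion token-by-token with async/await.
func runLocalModel() async throws {
    // A GGUF model file shipped with (or downloaded by) the app.
    guard let modelPath = Bundle.main.path(forResource: "model", ofType: "gguf") else {
        fatalError("Bundle a .gguf model with the app first")
    }

    // Hypothetical type: wraps a llama.cpp context for this model.
    let instance = LlamaInstance(modelPath: modelPath)

    // Hypothetical streaming call returning an AsyncSequence of tokens.
    for try await token in instance.respond(to: "Explain on-device AI in one sentence.") {
        print(token, terminator: "")
    }
}
```

The appeal of this shape is that inference never leaves the device: no API key, no network call, and the same `for try await` loop drives a streaming UI naturally.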

Top comment

Hey everyone! I built Kuzco for two big reasons:

1. Developers can cut costs by skipping cloud APIs and running AI locally on-device, completely free.
2. I was done sending my data (and users' data) to third-party servers.

Kuzco is a straightforward Swift wrapper for llama.cpp, letting you plug local models into iOS and macOS apps with just a few lines of code. It's fully open source, so dive in, tweak away, and if you use it (or even plan to) please drop me a link on X (Twitter) so I can cheer you on!