Alpie Core

A 4-bit reasoning model with frontier-level performance

Alpie Core is a 32B reasoning model trained, fine-tuned, and served entirely at 4-bit precision. Built with a reasoning-first design, it delivers strong performance in multi-step reasoning and coding while using a fraction of the compute of full-precision models. Alpie Core is open source, OpenAI-compatible, supports long context, and is available via Hugging Face, Ollama, and a hosted API for real-world use.

Top comment

Hey builders

Modern AI keeps getting better, but only if you can afford massive GPUs and memory. We didn’t think that was sustainable or accessible for most builders, so we took a different path.

Alpie Core is a 32B reasoning model trained, fine-tuned, and served entirely at 4-bit precision. It delivers strong multi-step reasoning, coding, and analytical performance while dramatically reducing memory footprint and inference cost, without relying on brute-force scaling.

It supports 65K context, is open source (Apache 2.0), OpenAI-compatible, and runs efficiently on practical, lower-end GPUs. You can use it today via Hugging Face, Ollama, our hosted API, or the 169Pi Playground.
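Because the API is OpenAI-compatible, calling it looks like any other chat-completions request. Here's a minimal Python sketch using only the standard library; the base URL and model name below are placeholders, not the real values, so check the 169Pi docs and swap in your own endpoint and API key.

```python
import json
import os
import urllib.request

# Placeholder values -- substitute the real base URL, model name, and
# API key from the 169Pi / Alpie Core documentation.
BASE_URL = os.environ.get("ALPIE_BASE_URL", "https://api.example.com/v1")
API_KEY = os.environ.get("ALPIE_API_KEY", "")


def build_chat_request(prompt: str, model: str = "alpie-core") -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 512,
    }


def complete(prompt: str) -> str:
    """POST the payload to the chat-completions endpoint and
    return the assistant's reply. Requires a real key and URL."""
    data = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=data,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    # Inspect the payload shape without making a network call.
    print(json.dumps(build_chat_request("Explain 4-bit quantization."), indent=2))
```

The same payload works against the hosted API or a local Ollama server, since both expose the OpenAI chat-completions format.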

To keep you building over Christmas and the New Year, we’re offering 5 million free tokens on your first API usage, so you can test, benchmark, and ship without friction.

This launch brings the model, benchmarks, API access, and infrastructure together in one place, and we’d love feedback from builders, researchers, and infra teams. Questions, critiques, and comparisons are all welcome as we shape v2.