Kimi K2 Thinking

The 1T-Parameter Open-Source Thinking Model - SOTA on HLE

Open Source
Artificial Intelligence
Development

🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%)
🔹 Executes up to 200–300 sequential tool calls without human intervention
🔹 Excels in reasoning, agentic search, and coding
🔹 256K context window

Top comment

👋 Hello from Kimi Team!

Introducing Kimi K2 Thinking: a 1T-parameter open-source reasoning model. SOTA with 44.9% on HLE and 60.2% on BrowseComp — not just open-source SOTA.

> Trillion-parameter MoE, trained for $4.6M, 4x cheaper than peers.
> INT4 inference: 4-bit quantized, <1.2s latency @ 256K context.
> Full step-by-step reasoning, 200+ tool calls, self-correction (GPT-5 level), fully open (MIT), OpenAI-compatible API, weights live on Hugging Face today, agentic mode next week.

We're thrilled to ship a SOTA model that's fully open. Can't wait to see what you all build! :)
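Since the post advertises an OpenAI-compatible API, any OpenAI-style client should be able to talk to it. A minimal sketch of building such a request follows; note that the base URL and model id shown in the comments are assumptions for illustration, not values confirmed by the post:

```python
# Sketch of an OpenAI-compatible chat request. The model id below is an
# assumption -- check Moonshot's own docs for the real identifier.

def build_chat_request(prompt: str, model: str = "kimi-k2-thinking") -> dict:
    """Build an OpenAI-style chat.completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# With the official `openai` Python client, the call would look like:
#   from openai import OpenAI
#   client = OpenAI(api_key="...", base_url="https://api.moonshot.ai/v1")  # assumed URL
#   resp = client.chat.completions.create(**build_chat_request("Hello"))
#   print(resp.choices[0].message.content)
```

Because the API is OpenAI-compatible, existing tooling built on the OpenAI SDK should only need the base URL and model name swapped.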

Comment highlights

It’s a bit inconvenient that you can’t use it without registering — you even need to sign up just to run the first search. But the project is interesting anyway!

Woah, quantized INT4 inference is a big deal here! Congrats on the launch!

Hi everyone!

The K2 model from July was already strong, but it wasn't a "Thinking" model.

Kimi K2 Thinking is a new generation thinking agent model, built on Moonshot's "model as agent" philosophy. The key difference is that it natively understands how to "think while using tools."
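The "think while using tools" idea can be sketched as a toy agent loop: the model alternates between requesting tool calls and producing a final answer, with a bounded step budget. Everything here — the stub model, the tool registry, the message shapes — is illustrative, not Kimi's actual interface:

```python
# Toy sketch of an agentic "think while using tools" loop. The real K2
# Thinking interleaves reasoning traces with tool calls over hundreds of
# steps; this stub does one tool call, then answers.

TOOLS = {"add": lambda a, b: a + b}  # hypothetical tool registry

def fake_model(history):
    """Stub model: requests one tool call, then answers using its result."""
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "add", "args": (2, 3)}       # "thinking" step: use a tool
    result = next(m["content"] for m in history if m["role"] == "tool")
    return {"answer": f"The result is {result}"}     # final answer

def run_agent(prompt, model, max_steps=10):
    """Loop: ask the model, execute any requested tool, feed the result back."""
    history = [{"role": "user", "content": prompt}]
    for _ in range(max_steps):                       # bounded, like K2's 200-300 call budget
        step = model(history)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](*step["args"])  # execute the requested tool
        history.append({"role": "tool", "content": result})
    return "step budget exhausted"

print(run_agent("What is 2 + 3?", fake_model))  # -> The result is 5
```

The key property the post claims is that K2 Thinking drives this kind of loop natively for hundreds of steps, rather than needing an external orchestrator to prompt each round.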

It can build a real, functional Word editor.

And it can also create a world of complex, gorgeous voxel art.