Mercury Edit 2 is a coding-focused diffusion LLM built specifically for next-edit prediction. It uses your recent edits and codebase context to suggest the next change, with much higher acceptance and much lower latency than typical code-edit models.
Mercury Edit 2 is not a general chat model for coding. It is purpose-built for next-edit prediction, one of the most latency-sensitive parts of dev workflows.
The interesting part is that it is built on a diffusion architecture, so it generates tokens in parallel instead of one at a time, which is exactly why it can feel so fast. Inception claims 75.6% quality at 221 ms latency, plus a 48% higher accept rate and 27% fewer shown edits than the previous version.
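To make the parallel-generation claim concrete, here is a toy sketch contrasting the two decoding styles. This is an illustration only, not Mercury's actual algorithm: the "denoising" rule (revealing a chunk of masked positions per step) and the step counts are made up for the example.

```python
def autoregressive_decode(target):
    """Emit one token per model call, left to right."""
    out, calls = [], 0
    for tok in target:
        out.append(tok)  # each token costs one sequential model call
        calls += 1
    return out, calls

def diffusion_decode(target, steps=4):
    """Refine ALL positions in parallel at each step, starting from masks."""
    out = ["<mask>"] * len(target)
    calls = 0
    chunk = (len(target) + steps - 1) // steps
    for s in range(steps):
        calls += 1  # one parallel model call updates every position at once
        for i in range(s * chunk, min((s + 1) * chunk, len(target))):
            out[i] = target[i]  # stand-in for "denoising" these positions
    return out, calls

target = ["def", "add", "(", "a", ",", "b", ")", ":"]
ar_out, ar_calls = autoregressive_decode(target)
df_out, df_calls = diffusion_decode(target)
print(ar_calls)  # 8 sequential calls, one per token
print(df_calls)  # 4 parallel refinement steps for the same 8 tokens
assert ar_out == df_out == target
```

The latency win comes from the second loop: the number of model calls scales with refinement steps, not with output length.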
If you use @Zed, there is a specific API key that unlocks a free one-month trial.
You can find the configuration tutorial here.
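As a rough sketch of what wiring an editor or script to the model might involve, the snippet below builds a request body for an OpenAI-compatible chat-completions endpoint. The URL, model identifier, and environment-variable name are assumptions for illustration; use the values from the actual configuration tutorial and your own API key.

```python
import json
import os

# Assumed endpoint and model name -- verify against the official docs.
API_URL = "https://api.inceptionlabs.ai/v1/chat/completions"
payload = {
    "model": "mercury-edit-2",  # hypothetical model identifier
    "messages": [
        {"role": "system", "content": "Predict the user's next code edit."},
        {"role": "user", "content": "recent edits + cursor context go here"},
    ],
}
headers = {
    # INCEPTION_API_KEY is an assumed env var name for the trial key.
    "Authorization": f"Bearer {os.environ.get('INCEPTION_API_KEY', '<key>')}",
    "Content-Type": "application/json",
}

# Serialize the body; sending it is left to your HTTP client of choice.
body = json.dumps(payload)
print(sorted(payload.keys()))  # ['messages', 'model']
```

The point is only that the request shape is the familiar chat-completions format, so existing OpenAI-compatible tooling should slot in with a base-URL and model-name change.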