On Lingo.dev, teams configure localization engines: stateful translation APIs with glossaries, brand voice rules, per-locale model chains, and AI quality scoring, then call them via API, CLI, CI/CD, or MCP.
Hey Product Hunt 👋
Thanks for hunting us. Excited to be here!
Two things changed at once in localization engineering
Teams are switching from legacy machine translation and translation vendors to LLMs. That part is visible. The invisible shift: LLMs without domain context don't localize, they just produce text that looks translated.
LLMs made translation fast. They also made it stateless.
Raw LLMs have no memory of previous decisions. The same term gets three different translations across the product. The results compound silently.
This is terminology drift. And it's the gap between translation and localization.
Translation converts text. Localization makes it consistent, domain-aware, and terminologically correct across every locale, every release. That gap is an engineering problem. And nobody had built the infrastructure for it.
Until lingo.dev v1.
What we learned from processing 200,000,000+ words:
We started at a hackathon in 2023. Won "Best DevTools." Spent 2024 building open-source localization tooling with select early users, design partners, customers, and our Discord community.
By 2025, we’d processed 200M+ words and teams at Mistral, Solana, SoSafe, and Cal.com were running localization through our infrastructure.
During this time, we learned that every team hit the same wall. LLMs translated fast. But terminology drifted across releases. The model had no memory of previous decisions. Each request started from zero.
The missing piece was never better models. It was the context pipeline around the model.
The research that shaped this:
We recently published a study on retrieval-augmented localization (RAL): injecting glossary terms into the LLM's context at inference time reduced terminology errors by 16.6–44.6% across five LLM providers and five European languages, backed by 42,000+ quality judgments in our published research.
The finding that mattered most: Mistral models with a 72-term glossary approached Google Gemini's raw quality at a fraction of the per-token cost.
Turns out, localization quality is a function of configuration, not model spend.
Read the research → https://lingo.dev/research/retri...
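The RAL idea is simple to sketch. Below is a minimal, hypothetical illustration of injecting glossary terms into an LLM prompt at inference time; the function name, glossary shape, and example terms are assumptions for this sketch, not Lingo.dev's actual implementation.

```typescript
// A glossary maps source terms to their required target-locale translations.
type Glossary = Record<string, string>;

// Build a prompt that pins terminology before the model translates.
// Only terms that actually appear in the source text are injected,
// keeping the context window small.
function buildRalPrompt(source: string, targetLocale: string, glossary: Glossary): string {
  const relevant = Object.entries(glossary).filter(([term]) =>
    source.toLowerCase().includes(term.toLowerCase())
  );
  const rules = relevant
    .map(([term, translation]) => `- "${term}" must be translated as "${translation}"`)
    .join("\n");
  return [
    `Translate the following text into ${targetLocale}.`,
    relevant.length > 0 ? `Use these fixed term translations:\n${rules}` : "",
    `Text: ${source}`,
  ].filter(Boolean).join("\n\n");
}

const prompt = buildRalPrompt(
  "Open the Dashboard to view your Workspace.",
  "de-DE",
  { Dashboard: "Dashboard", Workspace: "Arbeitsbereich" }
);
console.log(prompt);
```

The key property is that the same glossary is applied on every request, so the model cannot "forget" a prior decision between releases.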
What v1.0 ships:
Teams create stateful localization engines on Lingo.dev, configure them once, and call them from anywhere:
- Glossaries: map source terms to target translations per locale pair, injected at inference time on every request
- Per-locale model chains: ranked fallback across providers; swap models between releases without touching a single glossary term
- Brand voice and instructions: define tone per locale, set rules for specific patterns (quotation marks, elision, spelling conventions)
- AI reviewers: one model translates, another scores by dimension; cross-model quality measurement at scale
- API, CLI, CI/CD, MCP: synchronous API, async jobs with webhook delivery, npx lingo.dev@latest run, GitHub integration that opens PRs with translations on every push.
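Ranked fallback across providers is a general pattern worth making concrete. Here is a hedged sketch; the provider signature and names are made up for illustration and are not Lingo.dev's API:

```typescript
// A provider attempts a translation and may fail (rate limit, outage, etc.).
type Provider = (text: string, locale: string) => Promise<string>;

// Try each provider in ranked order; return the first successful result.
// Swapping or reordering the chain for a locale never touches glossary
// or brand-voice configuration.
async function translateWithFallback(
  text: string,
  locale: string,
  chain: Provider[]
): Promise<string> {
  let lastError: unknown;
  for (const provider of chain) {
    try {
      return await provider(text, locale);
    } catch (err) {
      lastError = err; // remember the failure, fall through to the next provider
    }
  }
  throw new Error(`All providers failed for ${locale}: ${lastError}`);
}

// Illustration: the primary provider is down, the fallback succeeds.
const flakyPrimary: Provider = async () => { throw new Error("503"); };
const fallback: Provider = async (text, locale) => `[${locale}] ${text}`;

translateWithFallback("Save changes", "fr-FR", [flakyPrimary, fallback])
  .then((out) => console.log(out)); // → "[fr-FR] Save changes"
```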
Where this doesn't work:
One-off translations with no consistency requirements.
Teams that prefer human-led review workflows may find legacy platforms a better fit.
Try it today:
Create your first localization engine in under 3 minutes at https://lingo.dev/
Before we go, there are a few things we're genuinely curious about from this community:
1. If you've localized a product into 3+ languages, what broke first - speed, quality, or consistency? (We have a hypothesis, but I'd love to know your experience.)
2. If you're a developer who's tried wiring LLM translation into a CI/CD pipeline, what did you have to hack around that you wish was just... handled?
We've been building in public since 2023, first with a select few users, then with our GitHub community, and now with you all.
Happy to go deep on the RAL research, the engine architecture, glossary injection mechanics, whatever's interesting.
Drop a comment or hit us directly!
Brand voice rules and glossaries are the part most translation tools skip. How do you handle the conflict when brand voice wants formal but a locale prefers casual?
I’ve been using Lingo for a long time. As a paying user, the best part is that I’ve almost forgotten Lingo is even there, yet I’m always confident it will handle translations accurately. It has become seamlessly integrated into our existing CI/CD workflow.
Indie builder question — at what point in the journey do you think a consumer app should start localizing? English-only right now with our nutrition app but EU is on the radar and I can't tell if it's a 1k-user problem or a 100k-user problem.
Pricing is a bit confusing. Can you guys give any indication here or on your website?
Friendly feedback: pricing page is slightly broken on mobile (from iPhone and navigated from producthunt).
Very interesting. Whenever people asked us if we could support localization, I'd say no.
There was no way to make sure that we got it right.
Who’s testing for the accuracy of tone, style, and context?
Google Translate is a complete joke in some cases.
What subset of these issues does your platform solve?
Good stuff overall!
Lingo.dev is an amazing product. I remember going through localization at Indeed, and it was a nightmare.
I've used Lingo since before the rename, when it was called Replexica. In short, I've had quite a happy experience.
It’s such a cool idea, how are you guys marketing this to build a userbase?
Congrats to the Lingo.dev team on the launch. I stumbled across it a while back and it’s been a genuinely great experience since then. Super smooth dev experience, very little friction, easy to drop into an existing workflow, and overall just feels thoughtfully built. Even the agents seem to enjoy using it. And of course, I’m quietly hoping the free tier stays around 😄
I've used Lingo even before this version (v1), specifically the compiler and engine. Both were super helpful and made explicit i18n setup unnecessary. Despite some bugs, it was totally worth it!
About Lingo.dev v1 on Product Hunt
“Localization engineering platform for consistent translation”
Lingo.dev v1 launched on Product Hunt on May 7th, 2026 and earned 215 upvotes and 27 comments, placing #4 on the daily leaderboard.
Lingo.dev v1 was featured in API (98.1k followers), Developer Tools (512.1k followers), Artificial Intelligence (468k followers) and GitHub (41.2k followers) on Product Hunt. Together, these topics include over 190.1k products, making this a competitive space to launch in.
Who hunted Lingo.dev v1?
Lingo.dev v1 was hunted by Garry Tan. A "hunter" on Product Hunt is the community member who submits a product to the platform, uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.