Ollang is the AI language execution layer for localization across web, apps, video, audio, and documents. Use MCP to let AI agents run workflows, SKILLS for reusable agent actions, the SDK to scan and apply translations, and the API to build end-to-end localization pipelines. One platform for multimodal localization and production-ready developer integration.
Very useful service. We actually have a project that we want to translate into the top 10 main languages.
By the way, how is your quality for translating less common languages? For example, Swedish? Most services usually struggle with that...
A language execution layer sitting between enterprise systems and AI models is an interesting architectural bet. I'd want to understand how it handles latency at scale - most "AI layer for enterprise" products work fine in demos and then fall apart when you actually route production traffic through them.
If I want to make my app available in whatever language users want, is this something Ollang can help with? I've noticed LLMs are good at some languages and less fluent in others, which can make for a non-uniform experience for users.
Long time Ollang user! Congrats on the launch @mazula95 !
I’m curious, how do you handle iteration and feedback loops? Like when translations get revised multiple times (by PMs, local teams, etc.), does the agent learn from those edits over time or is each run stateless?
the "english-first by default" framing is spot on. every ai workflow i've seen just assumes english and offloads the rest to a separate team or pipeline.
curious how the SKILLS layer works in practice across different agents, is it framework-agnostic or does each one need its own wrapper?
So proud of you for shipping this! 🎉 Honestly, it is such a clever move. To answer your questions: Cursor and Claude Code are definitely the most critical agent integrations for my workflow right now. As for file types, handling JSON for i18n directly within the workflow without breaking the structure is a lifesaver. Congratulations! 🚀
Love this! feels like something that should’ve existed already.
Curious about one thing: how do you handle quality consistency across very different modalities (e.g., subtitles vs. dubbed audio vs. structured JSON)? Is there a shared evaluation/QC layer, or does it vary per file type?
Localization is one of those things that gets shoved to the end of every sprint and done badly. The MCP + SKILLS approach for agent-driven workflows is interesting - what does a typical localization workflow look like when the agent runs it end-to-end? I'm curious how it handles context (strings that need different translations depending on UI placement) vs just raw key-value substitution.
Localization is one of those things that always gets pushed to "later" and then becomes a nightmare when you finally need it. How does it handle context-dependent translations? Like in our app, the word "network" means something very specific - does it learn domain-specific terminology or do you need to manually define a glossary?
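One common way to pin domain terms like "network" is placeholder protection around the machine-translation step. The sketch below is purely illustrative and hypothetical: it shows the general technique, not Ollang's documented behavior, and the glossary, token format, and German strings are invented for the example.

```python
# Hypothetical sketch of glossary enforcement via placeholder protection.
# Nothing here reflects Ollang's actual implementation.
import re

def protect_terms(text: str, glossary: dict[str, str]) -> tuple[str, dict]:
    """Swap glossary source terms for opaque tokens before MT, so the
    engine can't retranslate them; return the token -> forced-rendering
    map needed to restore them afterwards."""
    restore = {}
    for i, (term, forced) in enumerate(glossary.items()):
        token = f"__TERM{i}__"
        text, n = re.subn(rf"\b{re.escape(term)}\b", token, text,
                          flags=re.IGNORECASE)
        if n:
            restore[token] = forced
    return text, restore

def restore_terms(translated: str, restore: dict) -> str:
    """Replace each surviving token with its forced target-language term."""
    for token, forced in restore.items():
        translated = translated.replace(token, forced)
    return translated

# Protect "network" so a (pretend) MT engine leaves it alone.
protected, restore = protect_terms("Check your network settings",
                                   {"network": "Netzwerk"})
# Imagine the MT engine translated the protected string, token intact:
result = restore_terms("Prüfe deine __TERM0__-Einstellungen", restore)
```

This guarantees the pinned rendering but loses inflection (case, gender agreement), which is why production glossary systems are usually more elaborate than a string swap.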
This is exactly the kind of product that makes you think 'why didn't this exist before?' Multimodal localization with MCP + Skills + SDK in one platform is genuinely elegant architecture. The developer experience looks clean, and the enterprise angle is well thought out. Congrats on the launch @mazula95 — the future of AI-native localization is in good hands. 🚀
Congrats on the launch, Aziz!
Localization has always been one of those things that feels like it should be a one-liner but ends up being 5 different tools duct-taped together. Can't wait to try it. Finally having something that works natively inside agent workflows instead of as a separate step is a big deal! Nice work 🚀
Hey Product Hunt 👋
AI agents are everywhere, but localization is still fragmented, manual, and painfully multi-step.
We built Ollang MCP, Skills, and SDK to fix that.
→ One API to localize any file type → Works directly inside your agent workflows → Built for real-world complexity (video, audio, docs, i18n — all in one flow)
Curious, what’s the most painful part of localization in your current stack?
About Ollang DX on Product Hunt
“The AI Language Execution Layer for Enterprise”
Ollang DX launched on Product Hunt on March 30th, 2026 and earned 172 upvotes and 26 comments, placing #8 on the daily leaderboard.
Ollang DX was featured in API (98k followers), Developer Tools (511k followers) and Artificial Intelligence (466.2k followers) on Product Hunt. Together, these topics include over 161.8k products, making this a competitive space to launch in.
Who hunted Ollang DX?
Ollang DX was hunted by fmerian. A “hunter” on Product Hunt is the community member who submits a product to the platform — uploading the images, the link, and tagging the makers behind it. Hunters typically write the first comment explaining why a product is worth attention, and their followers are notified the moment they post. Around 79% of featured launches on Product Hunt are self-hunted by their makers, but a well-known hunter still acts as a signal of quality to the rest of the community.
Hey Product Hunt! 👋
We've been quietly building something we believe the developer community has been missing: A proper localization layer for the agentic era
Why we built this:
Every AI agent, every app, every workflow is still English-first by default. Getting to 240+ languages means stitching together 5+ APIs, managing file conversions, handling dubbing, subtitles, and i18n files separately, all while keeping quality consistent. It's a mess 🤯.
🎁 What the Ollang MCP, Skills, API & SDK do:
→ One API call to localize any file type — video, audio, DOCX, PDF, SRT, JSON, and more
→ Native MCP/SKILLS integration — Claude Code, Cursor, Cline, Codex and 15+ agents can localize files directly from their workflow
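To make the "one API call for any file type" idea concrete, here is a minimal sketch of what a single localization job might look like from the SDK side. This is a hypothetical illustration: the field names, the `build_localization_job` helper, and the payload shape are all assumptions, not the real Ollang API.

```python
# Hypothetical sketch only — field names and helper are assumptions,
# not Ollang's actual SDK surface.
import json
import mimetypes

def build_localization_job(path: str, target_langs: list[str]) -> dict:
    """Build one job payload covering any file type. The premise from
    the post: the service infers the right pipeline (dubbing, subtitles,
    i18n keys, document layout) from the file's type."""
    guessed, _ = mimetypes.guess_type(path)
    return {
        "file": path,
        "media_type": guessed or "application/octet-stream",
        "targets": target_langs,
        "preserve_structure": True,  # e.g. keep i18n JSON keys intact
    }

job = build_localization_job("app/locales/en.json", ["de", "ja", "sv"])
print(json.dumps(job, indent=2))
```

The point of the sketch is the shape of the contract: one request, any modality, with format detection and structure preservation handled server-side rather than by the caller.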
What we'd love your feedback on:
- Which agent integrations matter most to you?
- What file types are critical for your localization workflow?
- Would you use this for a personal project, startup, or enterprise?
We're answering every question today. Drop your hardest localization challenge below, and we'd love to solve it with you.
Get started free → ollang.com