DubStream by CAMB.AI

Dub live streams in 150+ languages, instantly

Video Streaming
Artificial Intelligence
Live Events

Broadcast your live stream in 150+ languages with real-time voice dubbing. DubStream is trusted by global leaders like MLS and NASCAR. Available via web platform or API. Built on CAMB.AI’s MARS8 voice AI.

Top comment

Hey Product Hunt 👋

For the past few years, we’ve been building real-time voice AI for live sports and global broadcasts.
If you’ve watched multilingual coverage around MLS, NASCAR, Ligue 1+, or the Australian Open, you’ve likely already heard our tech in action.

Today, we’re bringing that same infrastructure to everyone with DubStream by CAMB.AI.

Live events should be global by default. Language shouldn’t be the thing that stops people from tuning in. Subtitles break immersion, and post-production dubbing doesn’t work when the moment is happening now.

So we built DubStream with one goal: real audio, translated live.

What makes it different

  • 150+ spoken languages

  • Voice dubbing (not just captions): your audience experiences the stream instead of reading it

  • Multi-speaker + emotion preserved

  • Built on our proprietary MARS8 real-time speech model

It works across live streams and broadcasts: sports, news, webinars, creator streams, and anywhere else latency and quality actually matter.
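Conceptually, real-time dubbing chains three stages per audio chunk: speech-to-text, translation, and voice-preserving synthesis, with output starting before the input stream ends. A minimal sketch of that shape (all function names and stubs here are illustrative assumptions, not CAMB.AI's actual API or MARS8 internals):

```python
# Hypothetical sketch of a chunk-by-chunk live dubbing pipeline.
# Every stage is a stub; in production each would be a streaming model call.
from dataclasses import dataclass

@dataclass
class AudioChunk:
    speaker_id: str
    text: str  # stand-in for raw audio samples

def transcribe(chunk: AudioChunk) -> str:
    """STT stub: returns the source-language transcript of the chunk."""
    return chunk.text

def translate(text: str, target_lang: str) -> str:
    """Translation stub: tags the transcript with the target language."""
    return f"[{target_lang}] {text}"

def synthesize(text: str, speaker_id: str) -> str:
    """TTS stub: 'speaks' the translation in the original speaker's voice."""
    return f"<voice:{speaker_id}> {text}"

def dub_stream(chunks, target_lang):
    """Yield dubbed audio chunk-by-chunk so playback starts immediately."""
    for chunk in chunks:
        yield synthesize(translate(transcribe(chunk), target_lang),
                         chunk.speaker_id)

live_feed = [AudioChunk("commentator_1", "What a goal!"),
             AudioChunk("commentator_2", "Unbelievable finish!")]
dubbed = list(dub_stream(live_feed, "es"))
print(dubbed[0])  # <voice:commentator_1> [es] What a goal!
```

The generator structure is the key design point: latency is bounded by a single chunk's processing time rather than the length of the broadcast, which is why live dubbing is feasible where post-production dubbing is not.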


We’d love feedback from anyone streaming globally, building creator platforms, or thinking about how live content reaches international audiences.

Mamba mentality 🐍

Comment highlights

Voice cloning that preserves the original speaker's identity in real time across 150+ languages is no joke technically. The dialect-level support (LatAm vs Castilian Spanish, Canadian vs Parisian French) is a nice touch too. Are you doing the speech-to-speech translation end to end on MARS8 or is there still a separate STT step in the pipeline?

I'm trying to understand the workflow here. So a broadcaster plugs in a live stream via the web platform or API, picks target languages, and the dubbed audio goes out in real time? What does the experience look like on the viewer side? Do they choose a language before the stream starts, or can they switch mid-stream?

Congratulations on the launch! I have two use cases related to what you do: 1. Can your service read out an article or note? That is, I provide the text plus a sample of my video and voice, and it generates a video? 2. Is there live translation for meetings, for example in Google Meet?

Camb Streams seems like a fun way to bring AI into live content — curious how it feels in real streams and how helpful the AI suggestions are!

Congrats on the launch! Real-time voice dubbing that preserves speaker identity and emotion is a big leap beyond captions. How do you manage latency and quality trade-offs at scale, especially when multiple speakers switch rapidly or when live conditions like crowd noise and crosstalk get messy?

Wow, DubStream by CAMB.AI is amazing! The 150+ languages is mind-blowing. How does the voice AI handle nuanced dialects within a single language? Super curious!

The voice cloning aspect is what sets this apart from subtitle overlays - keeping the original speaker's identity matters so much for sports commentators and live events. Real-time dubbing in 150+ languages without post-production delays is impressive given the latency challenges. How does MARS8 handle rapid-fire commentary like you'd get in a close soccer match? Also really curious about the upcoming lip-sync AI - that'll be the final piece for full immersion.

That’s actually pretty cool, but how do you guys do it? Does the voice still sound like an AI, or is it more human-like? And does it adapt to your own voice?

Congrats on the launch.
I'm wondering what the realistic end-to-end latency is, from STT to final stream delivery?