Mosaic

Zapier for Video Editing

Artificial Intelligence
Marketing automation
Video

Mosaic allows you to automate any video edit, from rough cuts to motion graphics and anything in between. Our node-based canvas is an interface for setting up video editing workflows that scale. Once created, these can be reused as templates or triggered programmatically via API or event-based triggers. From any step along the way, seamlessly export your timeline back into traditional tools like Premiere Pro, Final Cut, or DaVinci Resolve, or to popular Media Asset Management systems.

Top comment

Hey Product Hunt!

I'm Adish, one of the co-founders of Mosaic (https://mosaic.so). Mosaic lets you create and run your own multimodal video editing agents in a node-based canvas. It’s different from traditional video editing tools in two ways: (1) the user interface and (2) the visual intelligence built into our agent.

While most AI video editors today try to retrofit existing timeline editors with a chat copilot, we realized that the chat UX has limitations for video: (1) the longer the video, the longer it takes to process, so users wait too long between chat responses; (2) users have set workflows that they reuse across video projects, and especially for people who produce a lot of content, the chat interface is a bottleneck rather than an accelerant.

The result: a node-based canvas where you can create and run your own agentic video editing workflows. This paradigm shift redefines what it means to be a "non-linear editor": the canvas becomes a scalable content engine where workflows can be reused as templates or triggered programmatically via API or event-based triggers.
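To make the programmatic side concrete, here is a minimal sketch of what triggering a saved workflow over HTTP could look like. The endpoint, auth scheme, and payload fields are my own assumptions for illustration, not Mosaic's documented API (see https://docs.mosaic.so/ for the real one):

```python
# Hypothetical sketch: kicking off a saved workflow run via a REST call.
# The endpoint, headers, and payload fields below are assumptions for
# illustration only; consult https://docs.mosaic.so/ for the actual API.
import requests

API_KEY = "YOUR_API_KEY"
WORKFLOW_ID = "your-workflow-id"

resp = requests.post(
    f"https://api.example.com/v1/workflows/{WORKFLOW_ID}/runs",  # assumed URL
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "input_video_url": "https://example.com/podcast-episode.mp4",
        "params": {"clip_count": 5, "aspect_ratio": "9:16"},  # assumed params
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. a run ID you could poll until the edit completes
```

The same call could sit behind an event-based trigger, e.g. fired from a webhook whenever a new recording lands in your storage bucket.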

Each node in the canvas represents a video editing operation and is configurable with natural language prompts, so you still have creative control. You can also branch to run edits in parallel, creating multiple variants from the same raw footage to A/B test different prompts, models, and workflows. In the canvas, you can see inline how your content evolves as the agent goes through each step.
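For a concrete picture of the branching model, here is a tiny sketch of how such a node graph might be represented; the operations, prompts, and field names are illustrative assumptions on my part, not Mosaic's actual schema:

```python
# Illustrative sketch of a branching video-edit workflow as a node graph
# (operations and field names are assumptions, not Mosaic's schema).
workflow = {
    "nodes": {
        "ingest": {"op": "import", "source": "raw_footage.mp4"},
        "rough_cut": {"op": "cut", "prompt": "Remove bad takes and dead air"},
        "hook_a": {"op": "trim", "prompt": "Open on the strongest one-liner"},
        "hook_b": {"op": "trim", "prompt": "Open on the product demo moment"},
    },
    "edges": [
        ("ingest", "rough_cut"),
        ("rough_cut", "hook_a"),  # branch A
        ("rough_cut", "hook_b"),  # branch B: same footage, different prompt
    ],
}

# Each branch yields its own output, so variants A and B can be A/B tested
# against each other while sharing the upstream rough cut.
```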

The idea is that the canvas runs your video editing on autopilot and gets you 80-90% of the way there. Then you can adjust and modify at a more granular level in an inline timeline editor. We also support exporting your timeline state as XML back out to traditional editing tools like DaVinci Resolve, Adobe Premiere Pro, and Final Cut Pro, or to popular Media Asset Management systems.

Our use of multimodal AI to build visual understanding and intelligence is a core platform feature. This gives our system a deep understanding of video concepts, emotions, actions, spoken word, light levels, and shot types. We're supplementing this with our own computer vision + video processing pipeline, which includes techniques like saliency analysis, audio analysis, and detection of objects of significance, all to help guide the best edit.

These are things that we as human editors internalize so deeply we may not think twice about them, but reverse-engineering the process to build it into the AI agent has been an interesting challenge.
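As a generic illustration of the kind of signal saliency analysis contributes, here is a plain OpenCV sketch of the technique; it is not Mosaic's pipeline, just the standard spectral-residual method applied per frame:

```python
# Generic per-frame saliency sketch with OpenCV (needs opencv-contrib-python).
# Illustrates the technique only; Mosaic's actual pipeline is not public.
import cv2
import numpy as np

saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
cap = cv2.VideoCapture("input.mp4")

while True:
    ok, frame = cap.read()
    if not ok:
        break
    success, saliency_map = saliency.computeSaliency(frame)  # floats in [0, 1]
    if not success:
        continue
    # Weighted centroid of the map approximates the center of visual attention;
    # a reframing step could keep this point inside a 9:16 crop window.
    ys, xs = np.indices(saliency_map.shape)
    total = float(saliency_map.sum()) or 1.0
    cx = float((xs * saliency_map).sum()) / total
    cy = float((ys * saliency_map).sum()) / total
    print(f"attention center: ({cx:.0f}, {cy:.0f})")

cap.release()
```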

Use cases for editing include:
1. Removing bad takes or creating script-based cuts from videos / talking-heads
2. Repurposing longer-form videos into clips, shorts, and reels (e.g. podcasts, webinars, interviews)
3. Creating sizzle reels or montages from one or many input videos
4. Creating assembly edits and rough cuts from one or many input videos
5. A/B testing different hook and CTA variants
6. Optimizing content for various social media platforms (reframing, captions, etc.)
7. Dubbing content with voice cloning and lip syncing
8. Generating *editable* motion graphic animations or cinematic captions

We also support generative workflows such as:
1. Creating new AI Avatar / UGC content
2. Creating new cartoon / animated content
3. Adding contextual AI-generated B-roll to existing content
4. Modifying existing video footage (e.g. censoring content, changing lighting, applying VFX)

We're giving everyone in the Product Hunt community a 20% discount if you sign up during our launch week! You can try it today at https://edit.mosaic.so, and our API and educational docs are at https://docs.mosaic.so/. We’d love to hear your feedback!

Comment highlights

Love the idea, it sounds powerful. The reusable workflow and autopilot angle feels like the strongest part.

Can I save a workflow and automatically run it every time I upload a new video, for example, turn every new podcast into 5 ready-to-post clips without touching anything?

This is a really powerful shift from “editing videos” to “designing video systems”.

Curious — how do you handle consistency across outputs?

Like when generating multiple variants (A/B tests, reels, etc.), how do you ensure brand voice, pacing, and visual identity don’t drift across different agent workflows?

This is exactly what the YouTube creator workflow has been missing. Right now, the creation pipeline looks like: brainstorm ideas → write script → shoot → edit → publish. The first two steps are getting automated fast (we built TubeSpark to handle ideation and script generation with AI), but editing has always been the manual bottleneck.

The node-based canvas approach makes a lot of sense — especially for creators who produce weekly content with consistent formats. Being able to save workflows as templates and trigger them via API is a game-changer for batch production.

Curious about one thing: how does Mosaic handle b-roll suggestions or cuts based on script pacing? Like if a script has a "pause for emphasis" moment, does the visual intelligence pick up on that?

Congrats on the launch, Adish!

This is what AI meets video editing needs to be: effortless.

I use it for personal memories, my friends use it for legitimate scaled content creation - we all get time back and better videos than we could’ve made ourselves. Don’t need much more.

Have been using and recommending Mosaic for the past 3 months. Game changer for podcast clipping and motion graphics.

@adishj This is lovely! Can you also include color grading styles and preferences?

As someone who has edited videos for hours and hours, I can't overstate the value of this product! Congrats @adishj, I would love to add this to my video toolkit.

The node-based canvas is the right interface for this. Chat-based video editing works for simple one-shot tasks but falls apart the moment you have a repeatable workflow with multiple steps, branching variants, and brand constraints you need to apply consistently across projects.

The A/B testing of hook and CTA permutations from the same raw footage is the use case that jumps out to me. That alone could change how content teams approach high-volume social production.

As a motion designer and Creative Director who works with brand video regularly, the "80-90% of the way there, then you refine" model is how I'd actually want to use this. The XML export back to Premiere, Final Cut, and DaVinci is also what makes this feel safe to adopt rather than a walled garden. Curious how the motion graphics node handles brand system constraints: can you feed it a style guide, or does it work purely from a prompt? Congrats on the launch!

The way we create videos has totally changed. If I wanted to create 2 different variations of a video (which also meant time for moodboarding and different script logic), it used to take days; now one single tool manages it in the blink of an eye... that's crazy.