The first AI-native backend engineered to power massive-scale consumer applications. Scale easily from prototype to millions of users. Automated MLOps frees you from maintenance. Deploy no-code experiments instantly. Battle-tested through work with NVIDIA, Google, and Xbox.
No way, AI products designed specifically to help apps grow? That’s genius—my side project could actually level up with something like this. How customizable are the integrations?
I'm super excited for the launch. I’ll try to explain in my own words how it works:
Step 1: Use our SDK to represent your AI logic (new or existing) as a graph. This is how you would explain your app logic to someone on a whiteboard. We tried alternative methods but found that this one works best.
Step 2: Test and improve it with a small set of users. Our Runtime will take much of the work off your shoulders by automatically optimizing performance, proposing improvements, and providing you with detailed analytics and telemetry. You no longer need to worry about tasks that previously would have taken weeks or even months.
Step 3: Launch and scale confidently, knowing you can grow with your users' demand. We'll make it as easy as possible for you to add new features quickly without breaking old ones, run A/B tests to determine what works best, and ensure you always receive the best cost for your AI models and services.
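To make Step 1 concrete: "explain your app logic as a graph" just means a DAG of processing nodes with data flowing along the edges. Below is a generic, self-contained sketch of that idea in Python; the class and node names are illustrative, not the actual Inworld SDK.

```python
# Tiny DAG executor: each node reads prior results and writes its own.
# Illustrative only -- this is NOT the Inworld Runtime SDK.
from graphlib import TopologicalSorter

class Graph:
    def __init__(self):
        self._fns = {}    # node name -> callable(results dict) -> value
        self._deps = {}   # node name -> set of upstream names

    def node(self, name, fn, deps=()):
        self._fns[name] = fn
        self._deps[name] = set(deps)
        return self

    def run(self, **inputs):
        results = dict(inputs)
        # Visit nodes in dependency order; raw inputs have no fn, skip them.
        for name in TopologicalSorter(self._deps).static_order():
            if name in self._fns:
                results[name] = self._fns[name](results)
        return results

# Example: a stubbed retrieve -> prompt -> llm pipeline.
g = (Graph()
     .node("retrieve", lambda r: f"docs-for:{r['query']}", deps=["query"])
     .node("prompt",   lambda r: f"{r['retrieve']} | {r['query']}", deps=["retrieve"])
     .node("llm",      lambda r: r["prompt"].upper(), deps=["prompt"]))

out = g.run(query="hello")
print(out["llm"])  # DOCS-FOR:HELLO | HELLO
```

The whiteboard analogy maps directly: boxes become nodes, arrows become `deps`, and the runtime (here, a topological sort) decides execution order for you.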
@audi_liu Hey! Congrats on going live; upvoted. We launched today as well, and your feedback would help.
Hi Product Hunt! I'm Audi, one of the Product Managers at @Inworld.
If I could compress my life experience into a product, it would be Inworld Runtime.
My life has been a series of rapid-fire learning loops.
From adapting as an immigrant, to driving TikTok's explosive growth, and now to building the runtime to empower every consumer app, the lesson was always the same:
The teams that learn and iterate fastest have the ultimate edge.
Today, that edge is more critical than ever.
Generative AI enables us to create 10x ideas and 100x software.
This presents the defining challenge and opportunity of our time:
Who can learn from and iterate on these ideas the fastest?
We built Inworld Runtime to give you that edge: it's the first AI runtime engine designed to help you build fast with Adaptive Graphs, scale fast with Automated MLOps, and iterate fast with Live Experiments.
As the PM for Live Experiments, my favorite part is watching a team go from 'we think' to 'we shipped'.
Live Experiments give you the power to:
One-Click A/B Test: Instantly deploy tests for models, prompts, or configs with no code changes or redeploys required.
Concurrent Experiments: Run hundreds of experiments simultaneously, testing variants in parallel for faster discovery.
Dynamic User Targeting: Target experiments by user segments, devices, or contexts for personalized testing and insights.
Smart Rollout: Automatically scale winning variants based on metrics like engagement, with safe rollback options.
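This sketch doesn't reflect Inworld's internal implementation, but the standard mechanic behind stable no-code A/B assignment is deterministic hash bucketing: hash (experiment, user) to a point in [0, 1] so the same user always lands in the same variant, with no stored state. Shifting the weights is then all a "smart rollout" needs to scale a winner. All names below are hypothetical.

```python
# Deterministic hash bucketing for A/B assignment (generic sketch,
# not Inworld's implementation).
import hashlib

def assign_variant(user_id, experiment, variants, weights):
    """Return the same variant for the same (experiment, user) pair."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    point = int(digest[:8], 16) / 0xFFFFFFFF   # uniform-ish in [0, 1]
    total = sum(weights)
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight / total
        if point <= cumulative:
            return variant
    return variants[-1]

# Same user, same experiment -> same arm on every call.
v1 = assign_variant("user-42", "prompt-test", ["control", "new-prompt"], [50, 50])
v2 = assign_variant("user-42", "prompt-test", ["control", "new-prompt"], [50, 50])
assert v1 == v2
```

Because assignment is a pure function of the IDs, a rollout is just a weight change (e.g. [50, 50] to [10, 90]), and a rollback is the reverse, with no per-user bookkeeping to migrate.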
The best feeling is waking up knowing I learned something that can help make a difference today.
Hola Product Hunt! :) Excited to see you guys again.
I joined Inworld because I believe in the mission to democratize AI for all. Today marks a massive leap toward that vision.
Runtime is a labor of love from folks who believe AI belongs in everyone's hands. It's an AI-native backend that handles the complexity of scaling and infrastructure so you can focus on what matters: your users.
Part of the reason we built this is because we were tired of seeing brilliant ideas die in maintenance hell. Now every builder gets infrastructure that just works, whether they have 10 users or 10 million, and I'm genuinely excited to see what unfolds.
Congrats on the launch! The Adaptive Graphs and Live Experiments sound ideal for rapid iteration.
Does Runtime natively support popular LLM providers (OpenAI, Anthropic) and self-hosted formats like ONNX or TorchServe, or will we need adapters? 🤔
We can't wait to see all the cool stuff the builders will create!
I work mostly on the GTM side at Inworld, so I spend a lot of my time talking to the builders who are pushing the limits of consumer AI.
Over the past few years, I’ve had many conversations with teams who get their product working but hit a wall when they try to launch. The patterns are the same as Kylan mentions above (difficult/expensive to scale, maintenance eats up time from innovation, experimentation is slow).
That’s why I’m so excited about what our amazing team has built. Runtime takes care of the heavy lifting so our users can focus on the magic of their product. I can't wait to see what consumer applications get built!
And a special shoutout to @andreasassad, who has been working hard behind the scenes to maximize Runtime's reach.
Hi Product Hunt! We're back! I'm Kylan, CEO and co-founder of @Inworld.
Today we're releasing Inworld Runtime, the first AI runtime engineered to scale consumer applications. All self-serve Runtime usage is FREE through August, so now's the perfect time to build.
We initially built the Inworld Runtime to tackle our own headaches serving large gaming and media partners like NVIDIA, Google, Xbox, Disney, Niantic, and NBCUniversal, and we are now opening up Runtime to everyone.
Over the last few years of working hand-in-hand with consumer builders, we learned that three problems most commonly hold consumer AI adoption back:
“I vibe-coded a prototype in 4 hours. Why does it take us 4 months to launch?”
“After a new feature launch, my team spends 6 months on maintenance and bug fixes. We can only do 2 features per year.”
"We just spent months on a feature that no one wanted and were wrong about what drives our users to spend more time. We need to run more tests."
Runtime was built to solve these challenges. It works easily with your current ML stack and integrates with your favorite models through a single interface and API key, giving you instant upgrades like:
Adaptive Graphs: vibe-friendly and production-ready SDKs with pre-optimized nodes for every model type, auto-scaling graphs, and smart edges; built in C++ for speed.
Automated MLOps: auto-captured telemetry, automated workload management, live metrics suites, and managed custom model training/hosting.
Live Experiments: one-click A/B tests, concurrent experiments, dynamic user targeting, and smart rollout. No code changes, instant insights.
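The "single interface and API key" idea above is, at its core, an adapter layer that routes one generate() call to whichever provider backs the requested model. Here's a generic sketch with stub backends; the class, method, and model names are assumptions for illustration, not Inworld's actual API.

```python
# Generic model-router sketch: one interface in front of many providers.
# Names are illustrative, not the Inworld Runtime API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Completion:
    text: str
    model: str

class ModelRouter:
    def __init__(self):
        self._backends: Dict[str, Callable[[str], str]] = {}

    def register(self, model: str, call: Callable[[str], str]) -> None:
        self._backends[model] = call

    def generate(self, model: str, prompt: str) -> Completion:
        if model not in self._backends:
            raise KeyError(f"no backend registered for {model!r}")
        return Completion(text=self._backends[model](prompt), model=model)

# Stub callables stand in for real provider SDK calls.
router = ModelRouter()
router.register("gpt-stub",    lambda p: f"[openai] {p}")
router.register("claude-stub", lambda p: f"[anthropic] {p}")

print(router.generate("claude-stub", "hi").text)  # [anthropic] hi
```

The payoff of this pattern is that swapping providers (or A/B testing them) becomes a one-line config change rather than a code rewrite, which is what makes the Live Experiments workflow above possible.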
Leading consumer AI applications are already using Runtime and its components. Status by Wishroll scaled from prototype to 1 million users in 19 days with over 20x cost reduction, Little Umbrella now ships new AI games monthly instead of yearly, and Bible Chat reduced their AI voice costs by 85% and scaled their voice features.
P.S. For teams that are spending more than $10K/month on AI or have raised more than $3M, we'll cover your first $20K of Runtime usage and give you dedicated integration support. If that's you, just contact our team at [email protected].