From prompt to visual story in seconds. Heywa dynamically builds the right visual experience around your question, so you can browse, compare, and go deeper - without endless tabs or long chat responses.
Hey Product Hunt 👋

I’m Milena, founder of Heywa Labs. I’ve wanted to launch this for a long time, and it's a bit surreal to finally share it here.
The origin story is simple: finding answers online is kind of boring. We spend hours a day in beautifully designed, intuitive mobile apps. They’re visual, responsive, easy to move through. And then the moment we want to learn, decide or scratch the curiosity itch, we’re back to either a list of blue links or a wall of chatbot text. It feels outdated.
Heywa is our attempt to make answering a question feel more like using a great app. You ask something - what to cook tonight, is HIIT actually good for you, what is solipsism - and instead of links or a long essay, you get a visual, structured story you can tap through. It helps you refine, it suggests follow-up actions, it lets you choose if you want to rabbit-hole or decide fast.
We're built for everyday questions. The small stuff. The random curiosity at 11pm. The decision you've been putting off. The idea that's been rattling around in your head.
Under the hood, it’s powered by what we call Generative UX. Not just generated content - the interface itself reshapes around your intent. A travel question looks different from a health question. A comparison behaves differently from open exploration. At Heywa Labs, we think this is where AI products are heading: interfaces that adapt to what you’re trying to do, not static boxes with smarter text inside.
We’re early and very open to feedback. Please drop a question below - Heywa and I are around all day to answer 👇

Milena 💚
This feels like a really unique way to do research or study. Are there ways to configure which sources the answers come from? For instance, if I don't want to see any TikTok videos so that I don't get distracted, is there a setting for that?
congrats on the launch, super awesome to finally get to see what Martin has been working on!
Thoroughly enjoying being part of Heywa and excited for this launch!
Generative UX is an ambitious concept and will become more powerful as we build, learn and iterate - I'm looking forward to defining this concept further with the brilliant team at Heywa Labs
Welcoming any user feedback/opinions for us to grow and improve!
Quick behind-the-scenes note on how Heywa actually works, since a few people asked 👀
One thing we felt strongly about when building this is that prompt engineering shouldn’t be a requirement for getting good answers. Most AI tools expect you to write the perfect prompt. If you don’t, you get a worse result.
With Heywa we tried a different approach: you can explore the answer by tapping through sections, narrow things down with suggested refinements, or take actions directly from the story (like jumping to comparisons, deeper explanations, or practical next steps), all without having to keep rewriting your question.
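If you're curious what that looks like as data, here's a minimal sketch of a story with refinements and actions attached - every name here is illustrative, not Heywa's actual schema:

```python
from dataclasses import dataclass, field

# Illustrative only: a rough data model for a tappable story,
# not Heywa's actual schema.

@dataclass
class Action:
    label: str      # button text, e.g. "Compare with steady-state cardio"
    follow_up: str  # the query that runs if the user taps it

@dataclass
class Section:
    title: str
    body: str
    actions: list[Action] = field(default_factory=list)

@dataclass
class Story:
    query: str
    sections: list[Section]
    refinements: list[str]  # suggested ways to narrow the question

story = Story(
    query="Is HIIT actually good for you?",
    sections=[Section(
        title="What the research says",
        body="Short, intense intervals improve VO2 max...",
        actions=[Action("Compare with steady-state cardio",
                        "HIIT vs steady-state cardio")],
    )],
    refinements=["For beginners", "For people over 50"],
)
```

The point is that every next step is a tap on something already on screen, not a rewritten prompt.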
Curious what you all think about this direction. Do you prefer conversational chat interfaces, or something more visual and navigable when you’re trying to figure things out?
If you want to try a few things that show the format well:
• “Best beginner strength training routine”
• “Why do stock markets crash”
• “How to host a dinner party for 8”
• “Best European train journeys”
Would love to hear what queries you throw at it.
Congrats on the launch, Milena! The "Generative UX" framing is really compelling — the interface reshaping around intent rather than just generating smarter text feels like the right direction. Quick question: for SEO/content discovery use cases (e.g. "best cafes in Lisbon"), are you indexing your own crawled content or pulling from existing search APIs? Curious how fresh/accurate local results are vs something like Perplexity.
Hi, congratulations on the launch. It's a really cool idea. I would expand it so you can upload your own photo and download stories separately, so you can use them on social media later.
Wow Milena! Love the concept and trying it out is so cool and easy. I'm sure many content creators will make the most of it. All the best here!!
Lovely to see our work out in the world :)
From the design perspective, one of the biggest challenges in developing Heywa has been creating a system and logic that lets us tell a good-quality, visual story for (almost) any question.
As Milena mentioned, Generative UX is our name for this approach. It's about figuring out what the user wants, deciding the best way to tell a story that answers that query, and then deciding how to display each step in a way that flows nicely and gets to the point.
We've started out focussing on the story format because it's a constrained canvas: we can refine and improve our approach without drowning in the scale and complexity of a full webpage or app (where I think a lot of products are falling down at the moment). Once we've got that nailed, I'm looking forward to introducing more interactivity and variety!
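To make that concrete, here's a minimal sketch of the three decisions in order (classify intent, pick a story structure, pick a layout per step) - the intents, structures, and layouts are made-up examples, not our production logic:

```python
# Illustrative sketch of the three Generative UX decisions in order:
# classify intent -> pick a story structure -> pick a layout per step.
# All intents, structures, and layouts are made-up examples.

STRUCTURE_FOR_INTENT = {
    "comparison": ["criteria", "side_by_side", "verdict"],
    "how_to": ["overview", "steps", "next_actions"],
    "explainer": ["hook", "core_idea", "examples", "go_deeper"],
}

LAYOUT_FOR_STEP = {
    "side_by_side": "comparison_cards",
    "steps": "numbered_checklist",
    "core_idea": "full_bleed_text",
}

def plan_story(query: str, classify_intent) -> list[dict]:
    """Turn a raw query into an ordered list of renderable story steps."""
    intent = classify_intent(query)  # in practice, an LLM call
    # Fall back to a generic shape so (almost) any question gets a story.
    structure = STRUCTURE_FOR_INTENT.get(intent, ["overview", "go_deeper"])
    return [
        {"step": step, "layout": LAYOUT_FOR_STEP.get(step, "text_card")}
        for step in structure
    ]

print(plan_story("HIIT vs running", lambda q: "comparison"))
# [{'step': 'criteria', 'layout': 'text_card'},
#  {'step': 'side_by_side', 'layout': 'comparison_cards'},
#  {'step': 'verdict', 'layout': 'text_card'}]
```

The interesting design work is in those fallbacks - they're what makes '(almost) any question' possible.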
Love this notion that prompt engineering is a UI failure. Especially for visual users, this idea is amazing! I can see how this leads to higher conversion and more effective user outcomes. Super excited to see how this evolves 🚀🚀
Super excited to be part of this launch!
Heywa is super interesting and challenging to work on. One of the biggest technical challenges was orchestrating all the different sources into an engaging, truthful answer.
A single user query gets decomposed into many parallel sub-queries across multiple retrieval sources, MCP tool integrations, and image sources, then the results get synthesised back into a coherent, enriched answer with relevant images. Not easy to do!
Getting all of that to stream back to the user in real-time while an LLM planner dynamically decides which tools and sources to invoke was a genuinely hard problem. Really excited to finally share what we've been building!
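For anyone curious, the fan-out/fan-in shape looks roughly like this minimal asyncio sketch - the sources, sub-queries, and synthesis step are all stand-ins, not our actual stack:

```python
import asyncio

# Minimal sketch of the fan-out/fan-in shape described above.
# Source names and the synthesis step are stand-ins, not Heywa's stack.

async def fetch(source: str, sub_query: str) -> dict:
    await asyncio.sleep(0.1)  # stand-in for a real retrieval/MCP/image call
    return {"source": source, "sub_query": sub_query, "result": "..."}

async def answer(query: str):
    # 1. Decompose the query (an LLM planner does this in practice).
    sub_queries = [f"{query} - overview", f"{query} - evidence"]
    sources = ["web_search", "mcp_tool", "image_search"]

    # 2. Fan out: run every (source, sub-query) pair in parallel.
    tasks = [fetch(src, sq) for src in sources for sq in sub_queries]
    results = await asyncio.gather(*tasks)

    # 3. Fan in: synthesise and stream sections back to the client.
    for i, chunk in enumerate(results):  # real version: stream from an LLM
        yield f"section {i}: {chunk['source']} -> {chunk['sub_query']}"

async def main():
    async for section in answer("why do stock markets crash"):
        print(section)

asyncio.run(main())
```

The real pipeline is harder in exactly the ways described above: the planner picks sources per query rather than using a fixed list, and sections stream back before everything has finished instead of waiting on one big gather.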
Congrats on the launch! Quick question: how does Heywa decide which structure a story should have (cards, comparisons, steps, etc.) for different types of questions?
Is it limited to only one picture per answer, or could there be more? Also, it's only pictures, not videos, right?