Three tools. One platform. Complete developer adoption intelligence. Time-to-value tracking, screen-recorded evaluations with real ICP-matched devs, and an AI engine that tells you exactly what's broken and how to fix it. The intelligence compounds. You've watched the dashboards. Developers still drop off. Now you'll know why.
This looks really useful. The screen-recorded evaluations with real devs are a great idea, getting unfiltered first impressions before launch sounds way better than guessing. Curious about the time-to-value tracking too, how do you define 'conversion'? Is it something I configure or does it detect it automatically? I'm building a desktop app with Electron so also wondering if this would work for that or if it's web-only.
Do you track where developers get stuck during onboarding or is it more focused on the overall experience? Congrats on the launch!
Here's an example of a developer adoption score!
Drop one script tag into your site. That's it.
bfd.js tracks how developers actually move through your docs and product—pageviews, clicks, scroll depth, time on page, rage clicks, copy events, JS errors, and form interactions. No fluff. No PII. Sensitive fields and params are automatically redacted.
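For illustration, installing the tracker would presumably look something like the snippet below. The CDN URL and the data attribute are assumptions made for this sketch, not the documented bfd.js embed:

<!-- Hypothetical embed; the src URL and attribute names are assumptions -->
<script src="https://cdn.builtfordevs.com/bfd.js" data-project="YOUR_PROJECT_ID" async></script>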
Pair it with screen-recorded evaluations from ICP-matched developers in our 6k+ network, and you stop guessing what's broken. You see it.
Three products. One goal: turn drop-offs into adoption.
→ JS tracking script — measures TTV and the full developer journey
→ Screen-recorded evaluations — real developers, your exact ICP, paid to do a thorough job
→ AI recommendations engine — tells you what's broken and what to fix. Gets smarter over time.
Built for dev tool teams who are tired of shipping docs into a void.
Do you find a difference in data quality between the developers that are being paid to test a tool versus actual users of the dev tool? For instance, I know that my behaviour is different when I'm filling out a survey for a contest versus one that I genuinely am interested in.
Congrats on the launch! 🎉 The insight about user interviews being unreliable is spot on — people describe a smoothed-out version of what actually happened, not the real confusion. Screen recordings of unscripted first experiences are a completely different signal. About to launch OceanMind, an AI-powered breathwork iOS app, and the onboarding drop-off problem is exactly what keeps me up at night. Curious whether the platform works for mobile app onboarding too, or is it primarily focused on web-based dev tools and SDKs? The ICP-matched developer evaluation piece is the part I find most compelling — getting real first impressions from people who match your actual user before you’ve burned your launch day on guesswork.
the "how developers actually use it" angle is underserved - most UX tools optimize for non-technical users and then try to bolt on developer modes. what kind of signals do you surface that typical session recording misses? I am thinking things like rage clicks on APIs or copy-pasting error messages.
I love the data-driven approach. Feedback to the team becomes easier to quantify and break down into work that shows actual results. Super helpful.
Built for Devs is the result of productizing a service that drove the greatest results I've ever seen in my career.
I brought real developers in to screen record themselves naturally trying a client's dev tool—no scripts, no hand-holding. Just honest, unfiltered first experiences. Those recordings shaped findings reports that told founders exactly what was broken and what to fix. Red flags for what needed to be addressed first. Quick wins for low-effort opportunities. The full story of how developers experienced every stage of the product.
The results were unlike anything else I'd produced.
One client fixed a handful of friction points and hit Product Hunt #1 product of the day and week. Another completely pivoted their market using insights from 10 developer segments. The recordings alone kept roadmaps full for months.
The service was such a success that it became a platform—and then some.
Dev tool founders now get continuous journey tracking so they always know where developers slow down and disappear. Real developer evaluations matched to their exact ICP—screen recorded, unscripted, and incredibly revealing. And an AI engine that analyzes every data point including video, drafts findings reports richer than anything I could produce by hand, and recommends exactly what to fix.
Not a one-time audit. A living system that gets smarter over time. The more data it collects, the more precise the recommendations get. Founders stop guessing. They know exactly what's broken and exactly what to do about it.
Built for Devs is developer adoption intelligence for dev tools. It shows founders exactly where developers drop off, why they leave, and what to fix—continuously.
If you're a developer reading this—those screen recordings don't record themselves. Built for Devs pays developers to try dev tools. No meetings, no scripts, no hand-holding. Just your honest first experience with a product.
HOW IT WORKS

Built for Devs uses a tracking script that captures the user's entire journey, from first visit through interaction, tracking pageviews, clicks, form submissions, time spent, errors, rage clicks, and scroll behavior to measure the dev tool's TTV (time to value). It uses that data to build a detailed developer journey, with your touchpoints mapped to the right stage. Everything it learns feeds back into continuously updated recommendations for improvement.
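As a rough sketch of how a value milestone could be wired up for TTV measurement (the bfd global and its track method are hypothetical names, not the product's documented API):

// Hypothetical API sketch: the 'bfd' global and 'track' method are assumptions.
// The script timestamps the first visit automatically; a configured value
// event closes the clock, e.g. the developer's first successful API call:
bfd.track('value_reached', { milestone: 'first_successful_api_call' });
// TTV = timestamp(value_reached) - timestamp(first_visit)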
When developers use a dev tool during an evaluation, the system processes three layers of data: a full transcript of what they say, video analysis revealing where they struggle or get confused, and interaction data captured by the tracking script that records navigation, clicks, and time spent. AI synthesizes these three layers into a findings report, identifying patterns in how developers approach problems and where the product could improve the experience.
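To make the three layers concrete, here is an invented example of what the correlated data might look like. Every field name below is made up for illustration; it is not the platform's actual schema:

// Invented shape, for illustration only -- not the platform's actual schema.
const evaluation = {
  transcript:   [{ t: 312, text: 'Not sure which API key this wants...' }], // what the dev says
  videoSignals: [{ t: 305, event: 'hesitation', screen: 'auth docs' }],     // where they struggle
  interactions: [{ t: 298, event: 'rage_click', target: '#copy-key' }],     // tracking-script data
};
// Correlating the three timelines around t ~ 300s surfaces one finding:
// developers stall at API-key setup, so the report flags the auth docs first.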