
Context.dev

One API to scrape, enrich, and understand the web.

API
Artificial Intelligence
Data

Hunted by Yahia Bakour

Context.dev (previously Brand.dev) gives your AI agents and apps real-time access to structured web data, with no brittle scraping infrastructure needed. Scrape any URL as clean markdown or HTML, extract brand data (logos, colors, fonts, socials) from any domain, crawl sitemaps, resolve transaction descriptors, and more. Typed SDKs are available for TypeScript, Python, and Ruby. Trusted by 5,000+ businesses, including Mintlify, Daily.dev, and Ferndesk.com. Most teams integrate in under 10 minutes.
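The blurb above mentions extracting brand data (logos, colors, fonts, socials) from a domain via a typed SDK. As a rough illustration of what consuming such a response might look like, here is a minimal Python sketch; the payload shape, field names, and `BrandData` class are assumptions for illustration, not the documented Context.dev API:

```python
# Hypothetical sketch of modeling a brand-enrichment response.
# All field names and the payload below are invented for illustration;
# the real Context.dev SDK and response schema may differ.
import json
from dataclasses import dataclass, field


@dataclass
class BrandData:
    """Minimal container for the brand attributes the launch copy
    describes: logos, colors, fonts, and social links."""
    domain: str
    logos: list = field(default_factory=list)
    colors: list = field(default_factory=list)
    fonts: list = field(default_factory=list)
    socials: dict = field(default_factory=dict)

    @classmethod
    def from_json(cls, payload: str) -> "BrandData":
        # Tolerate missing optional fields with sensible defaults.
        data = json.loads(payload)
        return cls(
            domain=data["domain"],
            logos=data.get("logos", []),
            colors=data.get("colors", []),
            fonts=data.get("fonts", []),
            socials=data.get("socials", {}),
        )


# Example payload (invented for illustration):
sample = json.dumps({
    "domain": "example.com",
    "logos": ["https://example.com/logo.svg"],
    "colors": ["#0055FF", "#FFFFFF"],
    "fonts": ["Inter"],
    "socials": {"twitter": "https://twitter.com/example"},
})

brand = BrandData.from_json(sample)
print(brand.domain)     # -> example.com
print(brand.colors[0])  # -> #0055FF
```

Parsing into a typed structure like this is the kind of thing the typed SDKs presumably handle for you; the sketch just makes the "structured web data" idea concrete.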

Top comment

Hey PH! Yahia here, founder of Context.dev (formerly Brand.dev).

We've been building this API for a while now, and the rebrand reflects where the product has grown: from brand data into a full web context layer. One API to scrape, enrich, and understand any website.

The problem we kept seeing: developers waste weeks stitching together scrapers, enrichment tools, and data providers. We wanted one clean API that just works.

Would love your feedback. Happy to answer any questions!

Comment highlights

This will help folks increase conversion in their onboarding. Personalization is great and reduces drop-off during the process.

Very good! Some time back I needed to pull the docs for different specialised tools, and in most cases their docs were structured in ways that are very hard for an LLM to understand, so I built my own scraper, then built a library of MD files, and so on....

OK, and I can see that you can read Zendesk docs!

What's your approach to handling rate limits when scraping at scale across multiple providers simultaneously? Really solid product, well done on the rebrand.

Interesting approach.

Feels like the hard part here isn’t scraping, but turning that data into something actually useful.

Curious what typical use cases look like in practice.

The best product for getting anything from the internet for your product! Congrats, Yahia!

Congrats on the launch :) Been building something that scrapes 16 different data sources per domain and the hardest part is always the cascade when one provider fails and you need to fall back without killing latency.

This looks like it could simplify a lot of that. How do you handle sites behind Cloudflare or heavy JS rendering? That's where most of my pain is.

This can be a real time-saver. I'm a developer, and I often end up writing a different scraper each time. Having a standardized API to extract content from websites is a really interesting solution.

Developers waste so much time stitching together scrapers, enrichment tools and retries — having all of that collapsed into one clean API is a genuine time-saver. Love the rebrand too, "Context" nails what it actually does now 🙌

Interesting direction. Tools like this usually solve the data collection part well, but teams often struggle with structuring and actually using the data afterward. Curious how you're thinking about that.