Product Thumbnail

Silicon Friendly

How Silicon Friendly is your website? (from L0 to L5)

API
Developer Tools
Artificial Intelligence
GitHub

Agents surf the internet more than we carbons do. They find interesting things and recommend them to their humans. So if your website isn't friendly to agents, it's likely not being discovered. L0-L5 is an open standard for ranking websites by how silicon friendly they are. PS: We generate a detailed report of your website that you can download and hand to your agent to make your website silicon friendly.
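For readers unfamiliar with llms.txt: it is a proposed convention (llmstxt.org) for a markdown file served at the site root that gives agents a concise, link-annotated map of the site. A minimal sketch for a hypothetical store might look like this - the names and URLs are illustrative, not from any real site:

```markdown
# Acme Store

> Acme sells handmade ceramics. The catalog, pricing, and ordering
> process are documented in the links below.

## Docs

- [Product catalog](https://acme.example/products.md): full product list with prices
- [Ordering guide](https://acme.example/ordering.md): how to place and track an order

## Optional

- [Company history](https://acme.example/about.md)
```

Per the proposal, the H1 title and blockquote summary come first, followed by H2 sections of annotated links; an "Optional" section marks content an agent can skip.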

Top comment

I've never wanted to visit my bank's website. I want to know my balance - and their website is the only way to get it. Someday soon, I'll just ask my agent to check it for me. But not today - because if the agent messes up, I'm f**ked.

That's the gap right now. Agents are getting good enough to browse any website. People are building CAPTCHA solvers and humanizers. This is happening whether website owners want it or not.

The question isn't if agents will browse the web. They will. The question is whether the web is ready for them. Or will we just shame them for failing to use a website properly when it wasn't built for them in the first place?

We built Silicon Friendly as an internet-wide initiative to answer that question. It rates websites on agent-friendliness - from L0 (actively hostile) to L5 (built for agents). Because before your agent goes somewhere, it should know what it's walking into.

> share this with your silicon: siliconfriendly.com/llms.txt

Comment highlights

I’m curious: how do you envision this report helping non-technical folks implement the recommendations?

The idea that agents are becoming just as important an audience as humans is something we don't think about enough. Really interesting way to frame it!

Tried this via Claude Code ("Evaluate our landing page via siliconfriendly.com/llms.txt") and it worked surprisingly well. Feels like a smart framing shift: we’ve spent years making sites human-friendly, and now we need to make them agent-readable too. The report runs inline and I never had to leave the CLI session - very nice. Congrats on the launch.

Taking it for a spin, Shubham! I really got the concept and I'll be trying it soon on many of my projects. Congrats on the launch btw

This sounds innovative and relevant! Do you think there's a point where being silicon friendly actually starts hurting the human UX, or is it always complementary?

Ran our main product landing through it and got L2 - robots.txt was there, but no llms.txt, and structured data was thin. I hadn't really thought about agent discoverability as a separate optimization from SEO, but it is a different thing. The L0-L5 framing makes it actionable vs. just a vague checklist.
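Thin structured data is often the quickest of those to fix. A minimal Schema.org JSON-LD block, embedded in the page's head inside a `<script type="application/ld+json">` tag, might look like the sketch below - the product name, URL, and price are placeholders, not real data:

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A hypothetical product used for illustration.",
  "url": "https://example.com/widget",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD"
  }
}
```

Both search crawlers and browsing agents can parse this without scraping the visual layout, which is presumably why it factors into the score.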

I ran one of my projects through your analytics and got some interesting insights. But one insight stands out for me more than the others: I'm seeing the recommendation to create an llms.txt file. Is there any real evidence that this file actually does anything? I already tried creating one for another project, and it did not make a difference.

Just ran my own site through this and honestly the report surprised me. I thought I was doing well with llms.txt and Schema.org structured data, but it turns out I'm at L2 with no public API or agent.json, which I hadn't even considered. Really like the L0-L5 framework - it makes it super clear what to prioritize next. The competitor comparison in the PDF was a nice touch too. Congrats on the launch!

Hey Shubham, that line about shaming agents for failing to use websites that were never built for them in the first place is a good reframe. Was there a specific site where you watched an agent completely fall apart on something simple and thought wait, this isn’t the agent’s fault, this site just wasn’t built for this?

Ran this on my own site and got L1 passed, L2 failed. Didn't even have a robots.txt set up, so that was a useful wake-up call. The level breakdown makes it really clear what to fix first instead of guessing. Curious if there are plans to show suggestions next to each failed check?
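For anyone else missing one: a robots.txt is just a few plain-text directives served at the site root. A permissive sketch that welcomes all crawlers - the sitemap URL is a placeholder - looks like this:

```text
User-agent: *
Allow: /

Sitemap: https://example.com/sitemap.xml
```

Sites that want to be selective can add per-crawler `User-agent` blocks with `Disallow` rules instead of allowing everything.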

Nailed the copywriting for the product name – it grabbed my attention right away! :D :)

I was thinking about this yesterday, how to make my site more agent friendly. Great work.

It might help to show real examples of sites at each level. That would make the scoring system more tangible.

It makes me very sad every time I have to register on services like this (give me one try! i just wanna check!). I don't want to register.

Quite nice! Btw, on what basis did you define these levels? Meaning, is there any way to actually test it, so I know before and after implementing the recommendations whether my site is agent friendly?

Think of it like a PageSpeed score but for AI agents — brilliant framing. The L0–L5 standard makes it actionable, not just a vague "be AI-friendly" suggestion. Already curious where most sites land 😅