Featured on September 28th, 2025
Check if ChatGPT, Claude, and Google can crawl your site in 10 seconds. Find and fix robots.txt issues blocking AI visibility.
Comment highlights
Ahh nice one! Just got the test done for AdAmigo.ai, and we are good to go! Nice launch - just upvoted :)
This is an excellent marketing move to get someone's email address! :D You are a marketing genius! :D :)
Been wondering if my site’s actually visible to all these new AI bots—this is such a smart shortcut! Does it also suggest exact fixes for robots.txt issues, or just flag them?
Let me be the first comment!
This is such a clever tool. As someone who’s been tweaking robots.txt files manually, this instantly checks AI crawler access. Huge time-saver.


TIL: Stack Overflow allows all crawlers (by accident)
I'd expect Stack Overflow to
✅ allow all search crawlers
🛑 block all LLM training material crawlers
when you check: all crawlers allowed (!)
but that's not the whole story
they do have a robots.txt w/ blocks (!)
the robots.txt says:
🛑 do not crawl any pages ("/")
it even says explicitly:
🛑 not for search
🛑 not for ai training
and it applies to any user agent, any crawler ("*")
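the deny-all rules described above can be tried out with Python's stdlib parser; the rules string below is a sketch of the thread's description, not the site's actual robots.txt:

```python
from urllib import robotparser

# Hypothetical rules mirroring the thread's description
# (block every path, for every user agent) -- not the site's actual file.
rules = """\
User-agent: *
Disallow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# With these rules, every agent is blocked from every path:
print(rp.can_fetch("Googlebot", "https://stackoverflow.com/questions"))  # False
print(rp.can_fetch("GPTBot", "https://stackoverflow.com/"))              # False
```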
but why does the crawler check then say everything is green?
the robots.txt is served w/ status 418
418 originated as a joke status code:
I'm a teapot
but that status code is in the 400-499 range (!)
and the RFC for robots.txt (9309)
says that if the robots.txt is served with a status in the 400-499 range
a crawler may access any resources
as if there is no robots.txt at all
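you can see that rule in action with the stdlib `urllib.robotparser`, which treats most 4xx responses the same way; a self-contained sketch with a local test server standing in for the real site:

```python
import http.server
import threading
import urllib.robotparser

# Minimal sketch: serve a deny-everything robots.txt with status 418
# and watch how a parser reacts. Rules mirror the thread's description,
# not Stack Overflow's actual file.
class TeapotHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(418)  # "I'm a teapot" -- still in the 4xx range
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"User-agent: *\nDisallow: /\n")

    def log_message(self, *args):  # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), TeapotHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

rp = urllib.robotparser.RobotFileParser()
rp.set_url(f"http://127.0.0.1:{server.server_address[1]}/robots.txt")
# CPython's parser: 401/403 -> disallow all; any other 4xx -> allow all,
# matching the RFC 9309 "unavailable" rule. The body is never parsed.
rp.read()
server.shutdown()

print(rp.can_fetch("GPTBot", "/anything"))  # True, despite Disallow: /
```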
and that's why the crawler check currently turns up green for Stack Overflow
not sure what's right for Stack Overflow
nor whether their robots.txt is set up as they intended
(I'd expect them to block crawlers for ai training but allow crawlers for search)
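that expected setup would need per-agent rules; a hypothetical sketch (GPTBot and CCBot as stand-ins for AI-training crawlers, not Stack Overflow's actual policy):

```python
from urllib import robotparser

# Hypothetical per-agent rules: block named AI-training crawlers,
# allow everyone else (e.g. search crawlers).
rules = """\
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

# Search crawlers fall through to the "*" group and are allowed;
# the named AI-training agents match their own deny-all groups.
print(rp.can_fetch("Googlebot", "/questions"))  # True
print(rp.can_fetch("GPTBot", "/questions"))     # False
```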
but it shows:
getting robots.txt right can be quite tricky