Fast checks • No account required

Robots.txt Tester

Enter any URL to check whether it is Allowed or Blocked by that site’s robots.txt for Googlebot (or a custom user-agent). We show the matched rule and the robots.txt source.
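Under the Robots Exclusion Protocol (RFC 9309), a site's robots.txt always lives at the root of the scheme and host, which is how a tester like this locates it for any URL you paste. A minimal sketch in Python (the helper name robots_url is ours, for illustration):

```python
from urllib.parse import urlparse

def robots_url(page_url: str) -> str:
    """Return the robots.txt URL for any page on the same scheme + host."""
    p = urlparse(page_url)
    # Path, query, and fragment are irrelevant: robots.txt sits at the root.
    return f"{p.scheme}://{p.netloc}/robots.txt"

print(robots_url("https://example.com/blog/post?id=1"))
# https://example.com/robots.txt
```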

Test a URL against robots.txt
Google-style longest match • Tie = Allow
Tip: If you only have a domain, paste example.com and we’ll assume HTTPS. For best accuracy, include the full URL path you want to check.
What robots.txt actually does (plain English)
Beginner-friendly

A robots.txt file tells search engines which parts of a website they are allowed or not allowed to crawl. It does not control rankings directly — it only controls access.

Important: If a URL is blocked by robots.txt, Google cannot crawl it, so it cannot read the page’s content (though the bare URL can still end up indexed if other pages link to it).

This tool checks whether a specific URL is allowed or blocked, and shows the exact rule that matches.

Common robots.txt rules explained
Examples
User-agent: *
Applies to all bots, including Googlebot — unless a bot has its own more specific User-agent group, in which case it follows that group instead.
Disallow: /
Blocks everything on the site. Often used temporarily during development.
Disallow: /admin/
Blocks only URLs that start with /admin/.
Allow: /
Explicitly allows crawling. Usually unnecessary unless overriding a broader block.
Allow: /images/
Allows a subfolder that would otherwise be blocked.
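Putting these together, a typical file groups rules under a User-agent line. This illustrative example (paths and sitemap URL are placeholders) blocks /admin/ for all bots while re-opening one subfolder:

```
User-agent: *
Disallow: /admin/
Allow: /admin/public/

Sitemap: https://example.com/sitemap.xml
```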
Why a page might be blocked unexpectedly
Very common
A blanket Disallow: / left over from a staging or development site.
A prefix rule that is broader than intended — Disallow: /p blocks /pricing too.
Rules placed under the wrong User-agent group, so Googlebot falls back to a different set of rules.
Case sensitivity: path matching is case-sensitive, so Disallow: /Admin/ does not block /admin/.
Tip: This tool uses Google-style rule matching. The longest matching rule wins, and ties favour Allow.
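The matching described in the tip can be sketched in a few lines of Python. This is a simplified illustration: rules here are plain path prefixes, whereas real robots.txt matching also handles * wildcards and $ end anchors, which are omitted for brevity.

```python
def is_allowed(path, rules):
    """rules: list of (directive, path_prefix) tuples, e.g. ("Disallow", "/admin/")."""
    best = None  # (prefix_length, directive) of the best match so far
    for directive, prefix in rules:
        if prefix and path.startswith(prefix):
            length = len(prefix)
            # Longer match wins; on equal length, Allow wins.
            if best is None or length > best[0] or (length == best[0] and directive == "Allow"):
                best = (length, directive)
    # No matching rule means the URL is allowed by default.
    return best is None or best[1] == "Allow"

rules = [("Disallow", "/admin/"), ("Allow", "/admin/public/")]
print(is_allowed("/admin/settings", rules))         # False: blocked by /admin/
print(is_allowed("/admin/public/logo.png", rules))  # True: longer Allow rule wins
print(is_allowed("/blog/post", rules))              # True: no rule matches
```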
Allowed vs indexed (they are not the same)
Common confusion

A URL being allowed by robots.txt does not guarantee it will be indexed. It only means Google can crawl it.

Allowed
Google is permitted to crawl the URL.
Indexed
Google has crawled the page and decided to include it in search results.
If a page is allowed but not indexed, the issue is usually content quality, internal links, or duplicate signals.
What robots.txt does NOT do
Myths
Blocking a URL with robots.txt does not remove it from Google’s index — an already-indexed page can stay indexed, and a blocked URL can even be indexed from links alone, without its content. To remove indexed pages, you need noindex, the Removals tool in Search Console, or proper authentication.
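For reference, a noindex directive lives in the page’s HTML head — and it only works if the page is crawlable, i.e. not blocked by robots.txt:

```
<!-- Tells search engines not to index this page.
     Google must be able to crawl the page to see this tag. -->
<meta name="robots" content="noindex">
```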
What to check if a URL is blocked
Next steps
First confirm which rule is blocking the URL (this tool shows the matched line), then decide whether to narrow the rule or remove it. After fixing robots.txt, Google may need time to re-crawl the site — you usually don’t need to request indexing immediately.
What to check next
Recommended tools
These tools are linked in a logical order to help diagnose indexing problems step by step.
Final takeaway
Summary

robots.txt is a powerful file, but small mistakes can block entire sections of a website without anyone noticing.

If Google can’t crawl a page, it can’t rank it — no matter how good the content is.