Enter any URL to check whether it is Allowed or Blocked by that site's robots.txt for Googlebot (or a custom user-agent). We show the matched rule and the robots.txt source.
Test a URL against robots.txt
Google-style longest match • Tie = Allow
What robots.txt actually does (plain English)
Beginner-friendly
A robots.txt file tells search engines which parts of a website they are allowed or not allowed to crawl. It does not control rankings directly; it only controls access.
Important: if a URL is blocked by robots.txt, Google will not crawl it, so it cannot see the page's content. The URL can still appear in search results if other pages link to it.
This tool checks whether a specific URL is allowed or blocked, and shows the exact rule that matches.
Common robots.txt rules explained
Examples
User-agent: *
Applies to all bots, including Googlebot.
Disallow: /
Blocks everything on the site.
Often used temporarily during development.
Disallow: /admin/
Blocks only URLs that start with /admin/.
Allow: /
Explicitly allows crawling. Usually unnecessary unless overriding a broader block.
Allow: /images/
Allows a subfolder that would otherwise be blocked.
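Put together, directives like the ones above might appear in a single robots.txt file (the paths here are illustrative):

```
# Applies to all bots, including Googlebot
User-agent: *
Disallow: /admin/          # block the admin area
Disallow: /assets/         # block the assets folder...
Allow: /assets/images/     # ...except its images subfolder
```

Because the Allow pattern is longer than the overlapping Disallow, it wins under Google-style matching, so URLs under /assets/images/ stay crawlable.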
Why a page might be blocked unexpectedly
Very common
A broad Disallow: / rule left over from staging
Folder-level blocks affecting important pages
Overlapping rules where the longest match wins
Blocking parameters or filtered URLs by accident
Tip: This tool uses a Google-style rule match.
The longest matching rule wins, and ties favour Allow.
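That matching logic is small enough to sketch in a few lines of Python. This is a simplified illustration (it uses plain prefix matching and ignores the * and $ wildcards that real matchers support; the rule patterns are examples):

```python
# Google-style rule selection: the longest matching pattern wins,
# and a tie between Allow and Disallow favours Allow.

def longest_match(rules, path):
    """rules: list of (directive, pattern) tuples, e.g. ("disallow", "/admin/").
    Returns True if the path is allowed to be crawled."""
    best = None  # (pattern length, is_allow) for the best rule seen so far
    for directive, pattern in rules:
        if pattern and path.startswith(pattern):
            allow = directive == "allow"
            key = (len(pattern), allow)  # on equal length, True > False = Allow wins
            if best is None or key > best:
                best = key
    return True if best is None else best[1]  # no matching rule = allowed

rules = [
    ("disallow", "/admin/"),
    ("allow", "/admin/public/"),
]

print(longest_match(rules, "/admin/settings"))  # False: only Disallow matches
print(longest_match(rules, "/admin/public/a"))  # True: longer Allow wins
print(longest_match(rules, "/blog/post"))       # True: no rule matches
```

The tuple comparison does the tie-breaking for free: patterns are compared by length first, and when lengths are equal the Allow rule sorts higher.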
Allowed vs indexed (they are not the same)
Common confusion
A URL being allowed by robots.txt does not guarantee it will be indexed.
It only means Google can crawl it.
Allowed
Google is permitted to crawl the URL.
Indexed
Google has crawled the page and decided to include it in search results.
If a page is allowed but not indexed, the issue is usually content quality, weak internal linking, or duplicate-content signals.
What robots.txt does NOT do
Myths
It does not remove pages from Google
It does not act as a noindex directive
It does not improve rankings
It does not secure private content
To remove indexed pages, you need a noindex directive, the Removals tool in Search Console, or proper authentication.
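For reference, a noindex can be set either in the page's HTML or as an HTTP response header. In both cases the page must stay crawlable (not blocked by robots.txt), or Google will never see the directive:

```
<!-- Option 1: a meta tag in the page's <head> -->
<meta name="robots" content="noindex">
```

Or, for non-HTML resources such as PDFs, as an HTTP response header:

```
X-Robots-Tag: noindex
```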
What to check if a URL is blocked
Next steps
Confirm the matched rule shown above
Check whether the block is intentional
Fix overly broad patterns
Re-test after updating robots.txt
After fixing robots.txt, Google may need time to re-crawl the site.
You usually don’t need to request indexing immediately.