Robots.txt Tester

Fetch or paste robots.txt rules, test any URL against a chosen crawler, and see exactly which directive allows or blocks the path.

SEO Tools · Blocks: 3 · Testing: Googlebot


Allowed

Googlebot can crawl this URL.

Matched rule: Allow: /
Tested path: /admin/users
Parsed Rules

User-agent: *
Allow: /
Disallow: /admin/
Disallow: /private/
Disallow: /wp-login.php
Disallow: /search?
Disallow: /*.pdf$

User-agent: AhrefsBot
Disallow: /

User-agent: Googlebot
Allow: /
Disallow: /staging/
Crawl-delay: 2

Sitemap: https://example.com/sitemap.xml
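As a quick local check, Python's standard-library urllib.robotparser can test paths against rules like these. Note it follows the original 1996 convention (first matching rule wins, no * or $ wildcards), so its verdicts can differ from Google-style longest-match evaluation; this sketch sticks to plain Disallow rules where both conventions agree.

```python
from urllib import robotparser

# Parse a subset of the rules above and test two paths.
rp = robotparser.RobotFileParser()
rp.parse("""\
User-agent: *
Disallow: /admin/
Disallow: /private/
""".splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/admin/users"))  # False
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))    # True
```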

Rule Priority

When multiple rules match a path, the longest (most specific) rule wins. If an Allow and a Disallow of equal length both match, Google resolves the tie in favor of Allow.
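The longest-match rule with the Allow tiebreak can be sketched in a few lines. This is an illustrative resolver over (directive, path) pairs using simple prefix matching, not the tester's actual implementation, and it ignores wildcards.

```python
def resolve(rules, path):
    """Return True if crawling is allowed under longest-match semantics.

    rules: list of ("allow" | "disallow", path_prefix) pairs.
    """
    best = None  # (match_length, is_allow)
    for directive, rule_path in rules:
        if path.startswith(rule_path):
            candidate = (len(rule_path), directive == "allow")
            # Longer path wins; on an equal-length tie, Allow (True) wins.
            if best is None or candidate > best:
                best = candidate
    return True if best is None else best[1]

googlebot_rules = [("allow", "/"), ("disallow", "/staging/")]
print(resolve(googlebot_rules, "/admin/users"))  # True: only "Allow: /" matches
print(resolve(googlebot_rules, "/staging/app"))  # False: "/staging/" is longer
```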

Not Security

robots.txt only guides compliant bots. It is not access control and should never protect private or sensitive URLs.

Root Placement

Place robots.txt at the root of the domain, for example https://example.com/robots.txt. Subdirectory robots files are ignored.
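Deriving that single valid location from any page URL is a one-liner: keep the scheme and host, replace the path with /robots.txt. A minimal sketch (the helper name robots_url is illustrative):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_url(page_url):
    """Return the only location crawlers check: <scheme>://<host>/robots.txt."""
    parts = urlsplit(page_url)
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_url("https://example.com/blog/post?id=1"))
# → https://example.com/robots.txt
```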
