1. Pick a preset
Start with a one-click preset for standard sites, WordPress, or blocking AI crawlers.
Create, customize, and download a robots.txt file for your website, with presets for AI crawlers, WordPress, and more.
Add separate rules for individual crawlers (Googlebot, AI bots, etc.)
Add disallowed paths, allowed paths, sitemaps, and per-bot rules for fine-grained control.
Copy the output to your clipboard or download as a ready-to-upload .txt file.
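Put together, the output is a plain text file. A hypothetical result for a WordPress site that also blocks one AI crawler might look like this (your presets, paths, and sitemap URL will differ):

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php

User-agent: GPTBot
Disallow: /

Sitemap: https://example.com/sitemap.xml
```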
A robots.txt file is a simple text document placed in the root of your website (e.g., https://example.com/robots.txt) that tells search engine crawlers which pages or directories they should or should not access. It follows the Robots Exclusion Protocol, a standard supported by all major search engines including Google, Bing, and DuckDuckGo.
While small websites can work fine without one, having a properly configured robots.txt helps you control crawl budget, prevent indexing of admin panels or staging areas, and, increasingly, block AI training crawlers like GPTBot, CCBot, and Google-Extended from scraping your content.
Use our generator to build the file visually, then download or copy it. Upload it to the root directory of your domain so it's accessible at yourdomain.com/robots.txt. After uploading, you can verify it works correctly using Google Search Console's robots.txt Tester (under the Legacy tools section).
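You can also sanity-check your rules locally before uploading, using Python's standard-library robots.txt parser. This is a minimal sketch with a hypothetical rule set; swap in your own file's contents:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content to validate before uploading.
rules = """\
User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# "*" asks about crawlers with no more specific User-agent block.
print(parser.can_fetch("*", "https://example.com/admin/settings"))  # blocked
print(parser.can_fetch("*", "https://example.com/blog/post"))       # allowed
```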
Remember that robots.txt only advises crawlers; it does not enforce access control. If you need to prevent a page from appearing in search results entirely, use a noindex meta robots tag on the page itself, and make sure robots.txt does not block crawlers from fetching that page, since a crawler can only honor a noindex tag it is allowed to see.
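The noindex directive is a single tag in the page's head. A minimal example:

```html
<head>
  <!-- Ask search engines to keep this page out of their results -->
  <meta name="robots" content="noindex">
</head>
```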
Blocking CSS and JS files: Google needs access to your stylesheets and scripts to render pages properly. Blocking them can hurt your rankings.
Forgetting the trailing slash: Disallow: /admin blocks both /admin and /admin-page, whereas Disallow: /admin/ only blocks paths inside that directory.
Using robots.txt for security: Disallowed URLs can still appear in search results if other pages link to them. Use proper authentication for sensitive pages.
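The trailing-slash distinction in particular is worth seeing side by side (comments in robots.txt start with #):

```
# Blocks /admin, /admin/, and /admin-page
Disallow: /admin

# Blocks only paths inside the /admin/ directory, e.g. /admin/settings
Disallow: /admin/
```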
A robots.txt file is a plain text file placed at the root of your domain that tells search engine crawlers and web robots which pages or directories they can or cannot access. It follows the Robots Exclusion Protocol.
It must be at the root of your domain, accessible at https://yourdomain.com/robots.txt. It will not work if placed inside a subdirectory.
Add separate User-agent blocks for AI bots like GPTBot, ChatGPT-User, Google-Extended, CCBot, and ClaudeBot, each followed by Disallow: /. Our generator includes a one-click "Block AI crawlers" preset for this.
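Written out, the "Block AI crawlers" preset produces blocks along these lines (add or remove agents as needed):

```
User-agent: GPTBot
Disallow: /

User-agent: ChatGPT-User
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```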
robots.txt controls whether crawlers fetch a page at all, while <meta name="robots"> tags control whether a crawled page appears in search results. Use both together for full control, but note that a crawler can only see a noindex tag on a page robots.txt allows it to fetch.
Search engines will crawl and attempt to index all accessible pages on your site. This is usually fine for small sites, but larger sites benefit from controlling their crawl budget.
No. It only instructs bots. If you need to prevent human access, use password protection or server-side authentication.