1. Configure
Define which bots to allow and which directories to hide from search results.
Create custom crawler instructions to control how search engines index your website.
User-agent: *
Disallow: /admin
Sitemap: https://example.com/sitemap.xml
2. Preview
Watch the robots.txt code generate in real time as you tweak the settings.
3. Download
Download the file and upload it to your website's root directory.
A robots.txt file is one of the most basic but crucial elements of technical SEO. It acts as a set of instructions for web crawlers (like Googlebot), telling them which parts of your website they should or should not visit. This is particularly useful for preventing the indexing of sensitive areas like /admin, /cgi-bin, or temporary search result pages.
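You can verify locally what effect such directives have by using Python's standard-library `urllib.robotparser`. The rules and URLs below are illustrative examples, not output from this tool:

```python
from urllib.robotparser import RobotFileParser

# Parse a minimal rule set like the example above.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /admin",
])

# Any crawler matched by "*" may not fetch paths under /admin...
print(rp.can_fetch("Googlebot", "https://example.com/admin/settings"))  # False
# ...but the rest of the site remains crawlable.
print(rp.can_fetch("Googlebot", "https://example.com/blog/post"))       # True
```

This is also a handy sanity check before uploading a hand-edited file, since a typo in a directive can silently block more (or less) than intended.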
Efficient crawling is essential for large websites. By using a robots.txt file to guide crawlers away from low-value pages, you preserve your **crawl budget**, allowing bots to focus on your most important content. Our generator makes it easy to create valid, professional instructions without needing to remember complex syntax.
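As a rough sketch of what a generator like this does internally (the function name and parameters here are assumptions for illustration, not the tool's actual code), building a valid file amounts to joining directives in order:

```python
def build_robots_txt(user_agent="*", disallow=(), sitemap=None):
    """Assemble robots.txt content from a few settings (illustrative sketch)."""
    lines = [f"User-agent: {user_agent}"]
    # One Disallow line per path keeps low-value areas out of the crawl.
    lines += [f"Disallow: {path}" for path in disallow]
    if sitemap:
        # Sitemap is a standalone directive and may appear anywhere in the file.
        lines.append(f"Sitemap: {sitemap}")
    return "\n".join(lines) + "\n"

print(build_robots_txt(disallow=["/admin", "/cgi-bin"],
                       sitemap="https://example.com/sitemap.xml"))
```

The point of the sketch is that the syntax itself is simple; the value of a generator is avoiding the small mistakes (ordering, blank lines between groups, stray whitespace) that are easy to make by hand.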
The robots.txt file must be placed in the root directory of your website, for example: https://yourdomain.com/robots.txt. Search engines will look for it there automatically; a robots.txt file in a subdirectory will be ignored.
No. Robots.txt only provides instructions to crawlers, and compliance is voluntary: well-behaved bots follow it, but malicious crawlers can ignore it entirely. It offers no security or privacy, and anyone with the link can still open a disallowed URL directly.
If no robots.txt file is present, search engine crawlers will assume they have permission to crawl your entire website.
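That default is easy to confirm with `urllib.robotparser`: an empty rule set permits every URL (the URL below is an assumed example):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.parse([])  # equivalent to an empty robots.txt: no rules at all

# With no rules, every crawler is allowed everywhere.
print(rp.can_fetch("Googlebot", "https://example.com/any/page"))  # True
```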