Free Robots.txt Generator

Create a perfect robots.txt file for your website in seconds. Control how search engines and AI bots crawl your site with our easy-to-use generator.

No signup required · 100% Free · AI Bot Blocking

Quick Presets

AI Bot Blocking

CMS Presets

Generated robots.txt

User-agent: *
Allow: /

Save this file as robots.txt in the root directory of your website.

Quick Tips

  • Use * as the user-agent to apply rules to all bots
  • A path of / refers to the entire site
  • Add your sitemap URL for better indexing
  • Test your rules before deploying

Need More Powerful SEO Tools?

SEOZilla.ai offers a complete suite of AI-powered SEO tools including keyword research, content optimization, site audits, and more.

What is a robots.txt File?

A robots.txt file is a simple text file that tells search engine crawlers and other web robots which pages or sections of your website they should or should not visit. It's placed in the root directory of your website (e.g., example.com/robots.txt) and is one of the first files bots look for when visiting a site.

Why is robots.txt Important?

  • Control Crawling: Prevent search engines from indexing private areas, duplicate content, or resource-heavy pages
  • Save Crawl Budget: Direct crawlers to your most important pages, especially for large websites
  • Block AI Crawlers: Prevent AI companies like OpenAI, Anthropic, and others from using your content for training
  • Protect Sensitive Areas: Keep admin pages, staging environments, and internal tools out of search results

How Does robots.txt Work?

The robots.txt file uses simple directives to communicate with web crawlers:

  • User-agent: Specifies which crawler the rules apply to
  • Allow: Permits access to specific paths
  • Disallow: Blocks access to specific paths
  • Sitemap: Points crawlers to your XML sitemap
  • Crawl-delay: Requests a delay between requests (not all bots honor this)
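For illustration, here is a hypothetical robots.txt that combines all five directives (the paths and sitemap URL are placeholders):

```
# Example robots.txt (illustrative paths)
User-agent: *
Disallow: /admin/
Allow: /admin/help.html
Crawl-delay: 10
Sitemap: https://example.com/sitemap.xml
```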

Frequently Asked Questions

Where should I place my robots.txt file?

Your robots.txt file must be placed in the root directory of your domain. For example, if your website is example.com, the file should be accessible at example.com/robots.txt. It won't work if placed in subdirectories.

Does robots.txt guarantee pages won't be indexed?

No, robots.txt is a directive, not a guarantee. While most major search engines respect robots.txt, it doesn't prevent pages from appearing in search results if other sites link to them. For complete blocking, use the 'noindex' meta tag or HTTP header.
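For reference, the noindex directive can be set either in the page's HTML or as an HTTP response header:

```
<!-- In the page's <head> -->
<meta name="robots" content="noindex">
```

The equivalent HTTP header is `X-Robots-Tag: noindex`, which is useful for non-HTML resources such as PDFs.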

How do I block AI crawlers like ChatGPT and Claude?

Use our 'Block All AI Bots' preset to block known AI crawlers including GPTBot (ChatGPT), anthropic-ai (Claude), CCBot (Common Crawl), and others. These crawlers generally respect robots.txt directives.
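A blocklist of this kind typically contains one stanza per crawler; a sketch covering the bots named above might look like:

```
# Illustrative excerpt of an AI-bot blocklist
User-agent: GPTBot
Disallow: /

User-agent: anthropic-ai
Disallow: /

User-agent: CCBot
Disallow: /
```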

What's the difference between Allow and Disallow?

'Disallow' prevents crawlers from accessing specified paths, while 'Allow' explicitly permits access. Allow is useful when you want to block a directory but permit access to specific files within it. Google applies the most specific (longest) matching rule; when an Allow and a Disallow rule are equally specific, the Allow rule wins.
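You can sanity-check Allow/Disallow rules with Python's standard-library urllib.robotparser. Note that this parser applies rules in file order (first match wins) rather than Google's longest-match behavior, so the Allow line is listed first here; example.com and the paths are placeholders:

```python
from urllib import robotparser

# Hypothetical rules: block /private/ but allow one file inside it.
# The Allow line comes first because robotparser matches in file order.
rules = """\
User-agent: *
Allow: /private/public.html
Disallow: /private/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "https://example.com/private/public.html"))  # True
print(rp.can_fetch("*", "https://example.com/private/secret.html"))  # False
```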

Should I include my sitemap in robots.txt?

Yes! Including your sitemap URL in robots.txt helps search engines discover and crawl your important pages more efficiently. It's a best practice for SEO and ensures crawlers can find your sitemap even if they don't discover it through other means.

How often do search engines check robots.txt?

Major search engines typically cache robots.txt and refresh it periodically. Google usually re-fetches it within a few hours to a day. If you make important changes, you can request a re-crawl through Google Search Console.

Ready to Boost Your SEO?

The robots.txt generator is just the beginning. SEOZilla.ai offers a complete suite of SEO tools to help your website rank higher.

Try SEOZilla.ai Free