Robots.txt Generator

Create robots.txt files in seconds with our free online generator. Visual builder, pre-built templates, real-time validation. 100% private, no signup required.

How to Use Robots.txt Generator

1. Choose a Template

Select from pre-built templates: Standard (allow most), Allow All, Block All, WordPress, or E-commerce. Each template is optimized for common website scenarios.

2. Add Sitemap URL

Enter your sitemap URL to help search engines discover your content more efficiently. This is optional but highly recommended for better SEO.
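
In the generated file, the sitemap appears as a `Sitemap` directive with an absolute URL. Multiple `Sitemap` lines are allowed if you maintain more than one sitemap (the example.com URLs below are placeholders):

```text
Sitemap: https://example.com/sitemap.xml
Sitemap: https://example.com/sitemap-images.xml
```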

3. Configure User-agent Rules

Add rules for different bots. Specify which paths to allow or disallow for each user-agent. Use quick-path buttons for common directories.

4. Generate & Download

Click generate to preview your robots.txt with validation feedback. Then copy or download the file and upload it to your website root directory.

💡 Pro Tip

After uploading your robots.txt file, test it using Google Search Console's robots.txt Tester to ensure it works as expected before search engines crawl your site.

Frequently Asked Questions

What is a robots.txt file?

A robots.txt file tells search engine crawlers which pages or files they can or cannot request from your site. It's placed in the root directory of your website and must be accessible at https://yourdomain.com/robots.txt. This file uses the Robots Exclusion Protocol to control crawler access.

Where should I place robots.txt?

Upload the robots.txt file to the root directory of your website so that it is accessible at https://yourdomain.com/robots.txt. The file must be in the top-level directory; placements in subdirectories, such as /folder/robots.txt, are ignored by search engines.

What is a User-agent?

A User-agent identifies a specific crawler or bot. Use * to apply rules to all bots, or specify names like Googlebot, Bingbot, etc. for targeted rules. Each search engine has its own user-agent string that identifies its crawler.

What's the difference between Allow and Disallow?

Disallow tells crawlers not to access specific paths. Allow explicitly permits access to paths that might otherwise be blocked by a Disallow rule. For example, you can disallow /admin/ but allow /admin/public/ using both directives together.
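
Combined, the two directives from that example look like this. Google resolves the overlap by applying the most specific (longest) matching rule, so the public subfolder stays crawlable:

```text
User-agent: *
Disallow: /admin/
Allow: /admin/public/
```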

Does robots.txt guarantee privacy?

No. robots.txt is a guideline, not a security measure. Malicious bots may ignore it entirely. Use proper authentication (password protection, IP restrictions) for sensitive content. Never rely on robots.txt to hide confidential information.

What is Crawl-delay?

Crawl-delay specifies the number of seconds a crawler should wait between requests. Important: Google does not support this directive. Bing and Yandex support it. For Google, use Search Console's crawl rate settings instead.
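
For the engines that honor it, the directive sits inside a user-agent group. Here Bingbot is asked to wait 10 seconds between requests:

```text
User-agent: Bingbot
Crawl-delay: 10
```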

How do I block a specific directory?

Add a Disallow rule with the directory path. For example, to block /admin/, use "Disallow: /admin/". This blocks all files in that directory. Remember to include both leading and trailing slashes for directory blocking.

Can I have multiple User-agent sections?

Yes! You can define different rules for different bots. Each User-agent section starts with "User-agent:" followed by the bot name or *. The rules that follow apply only to that specific user-agent until the next User-agent declaration.
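
For example, in the sketch below (with placeholder paths), Googlebot obeys only the group addressed to it and ignores the generic * group; a crawler follows the most specific matching group, not all of them:

```text
User-agent: Googlebot
Disallow: /no-google/

User-agent: *
Disallow: /private/
```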

How do I test my robots.txt file?

Use Google Search Console's robots.txt Tester to check if your file is working correctly. It shows which URLs are blocked and allows you to test changes before publishing. Bing Webmaster Tools also offers similar testing functionality.
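
You can also check rules offline with Python's standard-library urllib.robotparser before uploading anything. One caveat the sketch works around: Python's parser applies Allow/Disallow rules in their order of appearance, whereas Google picks the longest matching rule regardless of order, so the more specific Allow line is listed first here:

```python
from urllib.robotparser import RobotFileParser

# Rules to test, written exactly as they would appear in robots.txt.
# The specific Allow precedes the broad Disallow because Python's
# parser uses first-match semantics (Google uses longest-match).
rules = [
    "User-agent: *",
    "Allow: /admin/public/",
    "Disallow: /admin/",
]

parser = RobotFileParser()
parser.parse(rules)

print(parser.can_fetch("*", "https://example.com/admin/secret.html"))      # False
print(parser.can_fetch("*", "https://example.com/admin/public/faq.html"))  # True
print(parser.can_fetch("*", "https://example.com/contact.html"))           # True
```

Note that this checks only the basic path-prefix rules; the standard-library parser does not implement Google's `*` and `$` wildcard extensions.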

Why is my robots.txt not working?

Common reasons include: file not in root directory, incorrect syntax, caching delays (can take 24-48 hours), or the crawler ignoring rules. Verify file location, test with Google's tool, and check for syntax errors using our validator.

Can I use wildcards in robots.txt?

Yes, you can use * to match any sequence of characters and $ to indicate the end of a URL. For example, "Disallow: /*.pdf$" blocks all PDF files. Google, Bing, and Yahoo support wildcards, but not all search engines do.
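
A few wildcard patterns side by side (supported by Google and Bing, but not guaranteed on every engine; the paths are placeholders):

```text
User-agent: *
Disallow: /*.pdf$            # any URL ending in .pdf
Disallow: /downloads/*.zip$  # .zip files under /downloads/
```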

How long does it take for robots.txt changes to take effect?

Search engines cache robots.txt files for varying periods. Google typically checks every 24 hours but may cache longer. For urgent changes, use Google Search Console to request a recrawl. Changes can take 24-48 hours to fully propagate.

What happens if I don't have a robots.txt file?

Without a robots.txt file, search engines assume all pages are crawlable. This is generally fine for most websites. However, having one (even a simple "Allow: /" rule) can help manage crawl budget and prevent accidental indexing of sensitive areas.

How do I block specific query parameters?

Use wildcards to block URLs with specific parameters. For example, "Disallow: /*?sort=" blocks all URLs containing "?sort=". You can also block every URL with a query string using "Disallow: /*?", or target a specific parameter with "Disallow: /*?utm_source=" (a trailing wildcard is implied, so you don't need to add one).
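
Putting those patterns together, a rule set for a listing page might look like this (parameter names are placeholders):

```text
User-agent: *
# Block sorted and filtered views of listing pages
Disallow: /*?sort=
Disallow: /*?filter=
# Or, more aggressively, block every URL with a query string:
# Disallow: /*?
```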

What's the difference between robots.txt and meta robots tag?

Robots.txt controls crawler access at the site level before crawling begins. Meta robots tags control indexing at the page level after crawling. Use robots.txt for broad directory-level control, and meta robots tags for specific page-level instructions like noindex or nofollow.
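
For reference, a page-level noindex is a single tag in the page's `<head>`; the equivalent for non-HTML files is the HTTP response header `X-Robots-Tag: noindex`, set in your server configuration:

```html
<meta name="robots" content="noindex, nofollow">
```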

Robots.txt Directives Reference

Common User-agents

| User-agent | Description | Search Engine |
| --- | --- | --- |
| * | Applies to all crawlers | All search engines |
| Googlebot | Google's web crawler | Google |
| Googlebot-Image | Google's image crawler | Google Images |
| Bingbot | Microsoft Bing's crawler | Bing |
| Slurp | Yahoo's crawler | Yahoo |
| DuckDuckBot | DuckDuckGo's crawler | DuckDuckGo |
| Baiduspider | Baidu's crawler | Baidu (China) |
| Yandex | Yandex's crawler | Yandex (Russia) |
| facebookexternalhit | Facebook's crawler | Facebook |
| Twitterbot | Twitter's crawler | Twitter/X |

Common Paths to Block

| Path | Purpose | Example |
| --- | --- | --- |
| /admin/ | Admin panels and backends | Disallow: /admin/ |
| /wp-admin/ | WordPress admin area | Disallow: /wp-admin/ |
| /wp-includes/ | WordPress core files | Disallow: /wp-includes/ |
| /cgi-bin/ | CGI scripts directory | Disallow: /cgi-bin/ |
| /search | Internal search results | Disallow: /search |
| /*.pdf$ | All PDF files (wildcard) | Disallow: /*.pdf$ |
| /private/ | Private directories | Disallow: /private/ |
| /*?utm* | UTM parameter URLs | Disallow: /*?utm* |
| /cart/ | Shopping cart pages | Disallow: /cart/ |
| /checkout/ | Checkout pages | Disallow: /checkout/ |

Example robots.txt Files

Standard (Allow Most)

User-agent: *
Allow: /
Disallow: /admin/
Disallow: /private/
Disallow: /tmp/
Sitemap: https://example.com/sitemap.xml

Block All Crawlers

User-agent: *
Disallow: /

WordPress Site

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /wp-includes/
Disallow: /wp-login.php
Disallow: /wp-register.php
Disallow: /xmlrpc.php
Sitemap: https://example.com/sitemap.xml

E-commerce Site

User-agent: *
Disallow: /cart/
Disallow: /checkout/
Disallow: /account/
Disallow: /search
Disallow: /*?sort=
Disallow: /*?filter=
Allow: /
Sitemap: https://example.com/sitemap.xml

Advanced: Block Specific File Types

User-agent: *
Disallow: /*.pdf$
Disallow: /*.zip$
Disallow: /*.doc$
Disallow: /*.xls$
Allow: /
Sitemap: https://example.com/sitemap.xml

Advanced: Different Rules for Different Bots

User-agent: Googlebot
Disallow: /private/
Allow: /

User-agent: Bingbot
Crawl-delay: 10
Disallow: /private/
Allow: /

User-agent: *
Disallow: /private/
Disallow: /admin/
Allow: /

Sitemap: https://example.com/sitemap.xml

Troubleshooting Common Issues

Why is my robots.txt not working?

If your robots.txt file isn't working as expected, check these common issues:

| Issue | Cause | Solution |
| --- | --- | --- |
| File location wrong | Not in root directory | Move to https://yourdomain.com/robots.txt |
| Syntax errors | Typos, wrong case | Use our validator to check syntax |
| Caching delays | Search engines cache the file | Wait 24-48 hours or request a recrawl |
| Wrong user-agent | Bot name misspelled | Use correct names: Googlebot, Bingbot |
| Conflicting rules | Google resolves overlaps by longest match, but some parsers apply rules in order | Put more specific rules first and test the result |
| File not accessible | Server returns an error | Check that the file returns a 200 status |

How to Test Your robots.txt

  • Google Search Console: Use the robots.txt Tester to test URLs against your rules
  • Bing Webmaster Tools: Use the "Robots.txt Tester" under "Diagnostics"
  • Manual Check: Visit https://yourdomain.com/robots.txt in your browser
  • HTTP Status: Ensure the file returns a 200 OK status code

⚠️ Important Warning

A misconfigured robots.txt can accidentally block your entire site from search engines. Always test your file before deploying, especially when it contains broad "Disallow" patterns such as "Disallow: /", and confirm that important pages remain crawlable.

Common Mistakes to Avoid

  • Blocking CSS and JS files: Google needs to render pages properly. Don't block /css/, /js/, or similar directories.
  • Blocking images you want indexed: If you want images in Google Images, don't block /images/ or image files.
  • Using robots.txt for security: It only stops legitimate bots. Use authentication for real security.
  • Forgetting the sitemap: Always include your sitemap URL for better crawl efficiency.
  • Not testing after changes: Always validate after making changes to avoid accidental blocking.

robots.txt vs Other SEO Methods

There are multiple ways to control how search engines interact with your content. Understanding when to use each method is crucial for effective SEO.

| Feature | robots.txt | Meta Robots Tag | X-Robots-Tag Header |
| --- | --- | --- | --- |
| Scope | Site/directory level | Page level | Page/file level |
| Controls | Crawling | Indexing | Indexing |
| File types | All URLs | HTML pages only | Any file type |
| Page still crawled? | No (if blocked) | Yes | Yes |
| Page still indexed? | Possibly (a blocked URL can still be indexed if linked elsewhere) | No (if noindex) | No (if noindex) |
| Best for | Large directories, crawl budget | Individual pages | PDFs, images, non-HTML |

When to Use Each Method

Use robots.txt when:

  • You want to block entire directories (e.g., /admin/, /private/)
  • You need to manage crawl budget by blocking low-value pages
  • You want to prevent crawling of specific file types (e.g., PDFs)
  • You have duplicate content at different URLs

Use Meta Robots Tag when:

  • You want to control indexing of individual pages
  • You want pages crawled but not indexed (noindex)
  • You want to prevent links from being followed (nofollow)
  • You need page-level control within a directory

Use X-Robots-Tag Header when:

  • You want to control indexing of non-HTML files (PDFs, images)
  • You need server-level control
  • You want to apply rules to multiple file types at once
  • You're using a CDN or server configuration

Best Practices

Do's ✅

  • Be specific with paths: Use precise paths to avoid accidentally blocking important content
  • Test before deploying: Use Google Search Console's robots.txt Tester to verify
  • Include your sitemap: Add your sitemap URL to help search engines find your content
  • Keep it simple: Complex rules can lead to unexpected behavior
  • Monitor regularly: Check for errors in Google Search Console
  • Use lowercase: Directory and file paths are case-sensitive on many servers
  • Allow CSS and JS: Google needs these to render pages properly
  • Document your rules: Add comments (lines starting with #) to explain complex rules
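
Following the documentation tip above, a commented rule set might look like this (the paths are placeholders):

```text
# Keep bots out of internal search results
User-agent: *
Disallow: /search

# Help crawlers find the sitemap
Sitemap: https://example.com/sitemap.xml
```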

Don'ts ❌

  • Don't rely on it for security: Use authentication for sensitive areas
  • Don't block CSS/JS: This can hurt your SEO and rendering
  • Don't block images you want indexed: Unless you specifically don't want them in image search
  • Don't use for temporary blocks: Changes take time to propagate
  • Don't forget to test: Always validate after making changes
  • Don't use conflicting rules: Make sure your rules don't contradict each other

About This Tool

Last Updated: March 25, 2026

Author: FreeToolCenter Team

Category: SEO Tools

This robots.txt generator is a free, open-source tool that runs entirely in your browser. Your data never leaves your device; we don't store, track, or analyze any information you enter. This tool is designed to help webmasters, SEO professionals, and developers create properly formatted robots.txt files quickly and easily.
