X-Robots-Tag Generator
Generate an X-Robots-Tag HTTP header for SEO crawler control.
About X-Robots-Tag Generator
This X-Robots-Tag generator creates HTTP header directives for controlling search engine crawler behavior. It supports common directives including noindex, nofollow, noarchive, and nosnippet that can be combined into a single header value.
It is useful for PDFs, images, downloads, and other non-HTML files that need robots directives at the HTTP header level instead of HTML meta tags. Because such files cannot carry a meta robots tag, X-Robots-Tag is effectively the only way to control their indexing.
X-Robots-Tag Directives
| Directive | Description | Use Case |
|---|---|---|
| noindex | Exclude from search results | Private documents, internal PDFs |
| nofollow | Don't follow links in content | Pages with untrusted outbound links |
| noarchive | Don't show cached version | Time-sensitive content, pricing pages |
| nosnippet | Don't show description preview | Content you don't want previewed |
| notranslate | Don't offer translation | Content that loses meaning when translated |
| unavailable_after | Remove after specific date | Event pages, limited-time offers |
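The combination logic a generator like this applies can be sketched in a few lines of Python. The function name and the validation approach here are illustrative choices, not part of the tool itself:

```python
# Directive names from the table above; directives are case-insensitive,
# so input is normalized to lowercase before joining.
VALID_DIRECTIVES = {"noindex", "nofollow", "noarchive", "nosnippet", "notranslate"}

def build_x_robots_tag(directives, unavailable_after=None):
    """Combine directives into a single comma-separated header value."""
    parts = []
    for d in directives:
        name = d.strip().lower()
        if name not in VALID_DIRECTIVES:
            raise ValueError(f"unknown directive: {d}")
        parts.append(name)
    if unavailable_after:
        # unavailable_after takes its date value after a colon
        parts.append(f"unavailable_after: {unavailable_after}")
    return ", ".join(parts)

print(build_x_robots_tag(["noindex", "NoFollow"]))  # noindex, nofollow
```

The joined value is what goes after `X-Robots-Tag:` in the server configurations below.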
Setting X-Robots-Tag on Different Servers

Apache (.htaccess or httpd.conf):

```apache
Header set X-Robots-Tag "noindex, nofollow"
```

Nginx (server or location block):

```nginx
location ~* \.(pdf|jpg|png)$ {
    add_header X-Robots-Tag "noindex";
}
```

PHP:

```php
<?php
header('X-Robots-Tag: noindex, nofollow');
?>
```

Node.js:

```javascript
res.setHeader('X-Robots-Tag', 'noindex, nofollow');
```

Python (Flask or similar response object):

```python
response.headers['X-Robots-Tag'] = 'noindex, nofollow'
```
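For applications rather than server configs, the header can be attached once for all responses. A minimal framework-agnostic sketch as WSGI middleware (the function and the dummy app are illustrative, not part of any framework):

```python
# WSGI middleware sketch: append an X-Robots-Tag header to every response.
def add_x_robots_tag(app, value="noindex, nofollow"):
    def middleware(environ, start_response):
        def patched_start(status, headers, exc_info=None):
            headers = list(headers) + [("X-Robots-Tag", value)]
            return start_response(status, headers, exc_info)
        return app(environ, patched_start)
    return middleware

# Demo with a dummy WSGI app standing in for a real application:
def dummy_app(environ, start_response):
    start_response("200 OK", [("Content-Type", "application/pdf")])
    return [b"%PDF-1.4"]

captured = {}
def fake_start_response(status, headers, exc_info=None):
    captured["headers"] = headers

wrapped = add_x_robots_tag(dummy_app)
wrapped({}, fake_start_response)
print(captured["headers"])
```

In production you would more likely restrict the middleware to specific paths or content types rather than tagging every response.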
X-Robots-Tag vs Meta Robots vs Robots.txt
| Method | Works For | Limitations |
|---|---|---|
| X-Robots-Tag | All file types | Requires server configuration |
| Meta Robots | HTML pages only | Cannot be used for PDFs, images |
| Robots.txt | All crawlable URLs | Blocks access, doesn't control indexing |
Common Use Cases
- PDF documents: Prevent internal documents, whitepapers, or drafts from appearing in search
- Image files: Control indexing of sensitive or private images
- Download files: Prevent software, datasets, or media files from being indexed
- API responses: Control how API endpoint URLs appear in search results
- Staging environments: Block all indexing on non-production servers
- Time-limited content: Use unavailable_after for event pages or promotions
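For the time-limited case, unavailable_after needs a date value. Google documents that widely adopted formats such as RFC 850 or ISO 8601 are accepted; the exact format string below is one illustrative choice:

```python
from datetime import datetime, timezone

def unavailable_after_header(dt):
    """Build an X-Robots-Tag header with an RFC 850-style expiry date."""
    stamp = dt.astimezone(timezone.utc).strftime("%A, %d-%b-%Y %H:%M:%S GMT")
    return f"X-Robots-Tag: unavailable_after: {stamp}"

# e.g. an event page that should drop out of search after June 30, 2025
print(unavailable_after_header(datetime(2025, 6, 30, 23, 59, tzinfo=timezone.utc)))
```

Note that removal after the date is not instant; it takes effect once the crawler revisits the URL.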
Directive Combinations

```
X-Robots-Tag: noindex, nofollow, noarchive
X-Robots-Tag: googlebot: noindex
X-Robots-Tag: bingbot: nofollow
X-Robots-Tag: noindex, nofollow
X-Robots-Tag: googlebot: nosnippet
```
Testing and Validation
To verify X-Robots-Tag headers are being sent correctly:
- curl: run `curl -I https://example.com/file.pdf` to see response headers
- Browser DevTools: check the Network tab for X-Robots-Tag in response headers
- Google Search Console: URL Inspection tool shows indexing status
- Online header checkers: Various tools display HTTP response headers
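The same check can be scripted with the Python standard library. In this sketch a throwaway local server stands in for a real site, so the example is self-contained; against a live URL only the `get_x_robots_tag` function is needed:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def get_x_robots_tag(url):
    """Return the X-Robots-Tag header of a URL, or None if absent."""
    req = Request(url, method="HEAD")
    with urlopen(req) as resp:
        return resp.headers.get("X-Robots-Tag")

# Throwaway local server that sends the header, standing in for a real site:
class Handler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        self.send_response(200)
        self.send_header("X-Robots-Tag", "noindex, nofollow")
        self.end_headers()
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/file.pdf"
value = get_x_robots_tag(url)
print(value)  # noindex, nofollow
server.shutdown()
```

A HEAD request is enough here because only the headers matter, not the response body.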
Important Considerations
- X-Robots-Tag must be sent in the HTTP response, not set in HTML
- Directives are case-insensitive (noindex = NoIndex)
- Multiple values should be comma-separated
- Google supports all major directives; other engines may vary
- For complete removal from search, combine with robots.txt blocking after deindexing
Frequently Asked Questions
- What is the X-Robots-Tag HTTP header?
- The X-Robots-Tag is an HTTP response header that provides search engine crawlers with indexing and crawling directives. Unlike meta robots tags that work only for HTML pages, X-Robots-Tag works for any file type including PDFs, images, videos, and downloads.
- What is the difference between X-Robots-Tag and meta robots?
- Meta robots tags (<meta name="robots">) only work in HTML documents. X-Robots-Tag is an HTTP header that works for any content type. Use X-Robots-Tag for PDFs, images, and other non-HTML files where you cannot embed meta tags.
- What directives does X-Robots-Tag support?
- Common directives include: noindex (don't show in search results), nofollow (don't follow links), noarchive (don't show cached version), nosnippet (don't show description preview), notranslate (don't offer translation), and unavailable_after (remove after specific date).
- How do I set the X-Robots-Tag header on my server?
- Apache: Header set X-Robots-Tag "noindex, nofollow" in .htaccess. Nginx: add_header X-Robots-Tag "noindex, nofollow"; in server block. PHP: header('X-Robots-Tag: noindex, nofollow'). For PDFs and static files, configure your web server or use a CDN to add the header.
- When should I use X-Robots-Tag instead of robots.txt?
- Use robots.txt to block crawler access entirely. Use X-Robots-Tag when you want crawlers to access content but control how it appears in search results. X-Robots-Tag is more granular and allows indexing while controlling specific behaviors like snippets or caching.
- Can I use X-Robots-Tag for specific search engines?
- Yes, you can target specific crawlers by using their user-agent names: X-Robots-Tag: googlebot: noindex or X-Robots-Tag: bingbot: nofollow. However, not all search engines support user-agent-specific directives in HTTP headers.