
What is X-Robots-Tag?

X-Robots-Tag is an HTTP response header that tells search engines how to handle a page (e.g. noindex, nofollow).

The X-Robots-Tag is an HTTP response header that instructs search engine crawlers how to handle the response—for example, whether to index it or follow its links. It has the same effect as the <meta name="robots" content="..."> tag, but because it is sent by the server it applies to the entire response, which makes it usable for non-HTML resources such as PDFs and images.
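For example, a response carrying the header might look like this (an illustrative PDF response; the status line and other headers are just for context):

```http
HTTP/1.1 200 OK
Content-Type: application/pdf
X-Robots-Tag: noindex, nofollow
```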

Common directives

  • noindex — Don’t add this page to the search index.
  • nofollow — Don’t follow links on this page for crawling or pass link equity through them (support varies by crawler).
  • noarchive — Don’t show a cached copy.
  • nosnippet — Don’t show a snippet in results.
  • none — Shorthand for noindex, nofollow.

You can combine them (e.g. X-Robots-Tag: noindex, nofollow) and use multiple headers if needed.
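Combined and repeated headers can be folded into a single set of directives. A minimal Python sketch (the function name is illustrative; real header values can also carry a user-agent prefix such as "googlebot: noindex", which this sketch ignores):

```python
def parse_x_robots_tag(header_values):
    """Parse X-Robots-Tag values into a set of lowercase directives.

    header_values: list of header value strings, since a response may
    send the X-Robots-Tag header more than once.
    """
    directives = set()
    for value in header_values:
        for token in value.split(","):
            token = token.strip().lower()
            if token:
                directives.add(token)
    # "none" is shorthand for noindex, nofollow
    if "none" in directives:
        directives.update({"noindex", "nofollow"})
    return directives
```

For example, `parse_x_robots_tag(["noindex, nofollow"])` and `parse_x_robots_tag(["none"])` both yield a set containing noindex and nofollow.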

When to use it

  • Dynamic or sensitive content — When you can’t or don’t want to put meta tags in HTML (e.g. PDFs, certain APIs).
  • Site-wide rules — Set at server level so every response gets the same directive.
  • Conditional logic — Server can send different X-Robots-Tag by URL or user-agent.
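A site-wide rule is typically set in the web server configuration. For instance, a hypothetical nginx rule that attaches the header to every PDF response (the location pattern and directives are illustrative, not a recommendation):

```nginx
# Hypothetical nginx config: mark every PDF response as noindex
location ~* \.pdf$ {
    add_header X-Robots-Tag "noindex, nofollow";
}
```

Apache's mod_headers can express the same rule with a Header directive inside a FilesMatch block.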

X-Robots-Tag vs meta robots

  • Meta robots — In the HTML <head>; only applies to that document and only when the crawler parses HTML.
  • X-Robots-Tag — In the HTTP response; applies before HTML is parsed and can apply to non-HTML resources.
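Since crawlers honor both sources, a checker has to merge directives from the header and from the HTML. A minimal Python sketch using only the standard library (the function and class names are illustrative):

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Extract the content of <meta name="robots" content="..."> if present."""
    def __init__(self):
        super().__init__()
        self.content = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and (attrs.get("name") or "").lower() == "robots":
            self.content = attrs.get("content", "")

def effective_directives(x_robots_header, html_body):
    """Union of directives from the X-Robots-Tag header and meta robots."""
    directives = set()
    if x_robots_header:
        directives.update(t.strip().lower() for t in x_robots_header.split(","))
    parser = RobotsMetaParser()
    parser.feed(html_body)
    if parser.content:
        directives.update(t.strip().lower() for t in parser.content.split(","))
    return directives
```

For a non-HTML resource such as a PDF, only the header branch applies—there is no HTML to parse, which is exactly why the header exists.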

How BearAudit checks it

BearAudit reads the response headers for each crawled page and reports the presence and value of X-Robots-Tag. We surface noindex, nofollow, and other directives so you can see which pages are excluded from or restricted in indexing, and fix any unintended directives.
