The X-Robots-Tag offers more control than the robots.txt file and the robots meta tag. It can be used as an element of the HTTP header response for a given URL, and any directive that can be used in a robots meta tag can also be specified as an X-Robots-Tag. Here’s an example of an HTTP response with an X-Robots-Tag instructing crawlers not to index a page:
HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
(…)
X-Robots-Tag: noindex
(…)
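As an illustration not drawn from the original text, here is a minimal Python sketch (standard library only) of a server that attaches this header to every response. The handler class and the loopback address are assumptions made for the demo:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading
import urllib.request

class NoIndexHandler(BaseHTTPRequestHandler):
    """Serve every response with an X-Robots-Tag: noindex header (demo only)."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("X-Robots-Tag", "noindex")
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>kept out of the index</body></html>")

    def log_message(self, *args):
        pass  # silence per-request logging for the demo

server = HTTPServer(("127.0.0.1", 0), NoIndexHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the page and read the header exactly as a crawler would
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/") as resp:
    tag = resp.headers["X-Robots-Tag"]

server.shutdown()
print(tag)  # prints "noindex"
```

In production the same header would normally be added by the web server configuration rather than application code.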
Multiple X-Robots-Tag headers can be combined within the HTTP response, or you can specify a comma-separated list of directives. Here’s an example of an HTTP header response which has a noarchive X-Robots-Tag combined with an unavailable_after X-Robots-Tag:
HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
(…)
X-Robots-Tag: noarchive
X-Robots-Tag: unavailable_after: 25 Jun 2010 15:00:00 PST
(…)
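Because repeated headers and a comma-separated list are equivalent, both forms can be normalized the same way. The helper below is a hypothetical sketch, not part of any library; it assumes directive values that contain no commas other than the separators themselves:

```python
def combine_directives(header_values):
    """Flatten repeated X-Robots-Tag values into one list of directives.

    Repeated headers and a single comma-separated value are equivalent,
    so both spellings below produce the same directive list.
    """
    directives = []
    for value in header_values:
        # the unavailable_after date shown above contains no commas,
        # so a plain comma split is safe for these directives
        directives.extend(part.strip() for part in value.split(","))
    return directives

two_headers = ["noarchive", "unavailable_after: 25 Jun 2010 15:00:00 PST"]
one_header = ["noarchive, unavailable_after: 25 Jun 2010 15:00:00 PST"]
print(combine_directives(two_headers) == combine_directives(one_header))  # prints True
```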
The X-Robots-Tag may optionally specify a user-agent before the directives. For instance, the following set of X-Robots-Tag HTTP headers can be used to conditionally allow showing of a page in search results for different search engines:
HTTP/1.1 200 OK
Date: Tue, 25 May 2010 21:42:43 GMT
(…)
X-Robots-Tag: googlebot: nofollow
X-Robots-Tag: otherbot: noindex, nofollow
(…)
Directives specified without a user-agent are valid for all crawlers. Neither the directive names nor the specified values are case sensitive.
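The rules above (an optional user-agent prefix, unprefixed directives applying to all crawlers, and case-insensitive matching) can be sketched as a small parser. This is a hypothetical helper, not any search engine's actual implementation, and the set of known directive names is an assumption used to tell a user-agent prefix apart from a directive such as unavailable_after that also contains a colon:

```python
KNOWN_DIRECTIVES = {  # assumed subset of directive names, for prefix detection
    "all", "noindex", "nofollow", "none", "noarchive",
    "nosnippet", "notranslate", "noimageindex", "unavailable_after",
}

def directives_for(header_values, bot):
    """Collect the X-Robots-Tag directives that apply to a given crawler.

    A value may start with "<user-agent>:"; values without a user-agent
    apply to every crawler. Names and values are matched case-insensitively.
    """
    result = []
    for value in header_values:
        agent = None
        head, sep, tail = value.partition(":")
        # Treat the prefix as a user-agent only if it is not a known
        # directive name (unavailable_after also uses a colon).
        if sep and head.strip().lower() not in KNOWN_DIRECTIVES:
            agent, value = head.strip().lower(), tail
        if agent is None or agent == bot.lower():
            result.extend(d.strip().lower() for d in value.split(","))
    return result

headers = ["googlebot: nofollow", "otherbot: noindex, nofollow"]
print(directives_for(headers, "Googlebot"))  # prints ['nofollow']
print(directives_for(headers, "otherbot"))   # prints ['noindex', 'nofollow']
```

Note how lowercasing both the agent prefix and the bot name implements the case-insensitivity described above.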