noindex, follow

The noindex, follow robots directive, delivered via a meta robots tag or an X-Robots-Tag HTTP header, instructs a compliant crawler to exclude the page from its index while still following the links on the page. The noindex portion keeps the URL out of search results and causes Google to eventually drop it from the index entirely; follow signals that the crawler should still spend crawl budget traversing the page's links to discover or recrawl other URLs.

This combination is commonly used on paginated pages, thin session-parameter URLs, internal search results, and administrative pages: pages that should not surface in search but whose links, for example to product pages, should remain crawlable.

Note that a robots.txt disallow prevents Google from fetching the page and therefore from seeing the directive at all. A noindex directive must not be paired with a robots.txt disallow on the same URL if the intent is to remove the page from the index.
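The on-page form of the directive is a single meta tag in the document head, along these lines:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Tells compliant crawlers: do not index this page, but do follow its links -->
  <meta name="robots" content="noindex, follow">
  <title>Internal search results</title>
</head>
<body>
  <!-- Links here remain crawlable, e.g. to product pages -->
  <a href="/products/example-product">Example product</a>
</body>
</html>
```

For non-HTML resources such as PDFs, the same directive can be sent as a response header, `X-Robots-Tag: noindex, follow`, since there is no markup to place a meta tag in.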