Crawlability

Crawlability is the ease with which search engine bots can discover and access a site’s pages. A page is crawlable if Googlebot can reach it via a followed link or sitemap, the URL is not blocked by robots.txt, and the server returns a timely, readable response. Common crawlability problems include robots.txt disallow rules that block important sections, orphaned pages with no incoming links, excessive URL parameters that create infinite crawl paths (a spider trap), login walls, and unreliable server performance. Crawlability is a prerequisite for indexation: a page that cannot be crawled will not appear in search results. It is closely related to, but distinct from, indexability, which concerns whether a crawled page is accepted into the index.
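The robots.txt check described above can be verified programmatically. As a minimal sketch, Python's standard-library robot parser can test whether a given user agent may fetch a URL under a site's rules; the robots.txt contents, URLs, and user agent below are illustrative assumptions, not taken from any particular site.

```python
# Minimal sketch: check whether a URL is crawlable for a given bot,
# according to a site's robots.txt, using Python's stdlib parser.
from urllib.robotparser import RobotFileParser

def is_allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if robots_txt permits user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical robots.txt blocking a private section and internal search.
robots = """\
User-agent: *
Disallow: /private/
Disallow: /search
"""

print(is_allowed(robots, "Googlebot", "https://example.com/blog/post"))  # True
print(is_allowed(robots, "Googlebot", "https://example.com/private/x"))  # False
```

Note that passing such a check only means the URL is not blocked; the page must still be reachable via links or a sitemap and served reliably to be crawled in practice.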