A spider trap is a set of URLs that causes a search-engine crawler to enter an infinite or near-infinite loop, consuming crawl budget without discovering useful content. Traps typically arise from dynamic URL generation: a calendar that links to “next month” indefinitely, session IDs appended to every URL so that each visit produces unique variants of the same page, or URL parameters that combine in arbitrary orders to generate an explosion of distinct addresses. E-commerce sites with faceted navigation frequently create unintentional spider traps. The consequence is that Googlebot exhausts its crawl allocation on meaningless URLs while important pages go undiscovered. Common fixes include blocking the offending URL patterns in robots.txt, adding rel="nofollow" to the links that generate the trap, and normalising URL variants with canonical tags.
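The session-ID variant of the trap can also be defused on the crawler side by normalising URLs before deduplication, so that all session-ID variants collapse to one frontier entry. A minimal sketch, assuming the session parameters are named `sessionid`, `sid`, or `phpsessid` (illustrative names, not from the source):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Query parameters treated as session/tracking noise.
# These names are illustrative assumptions, not a standard list.
TRAP_PARAMS = {"sessionid", "sid", "phpsessid"}

def normalise(url: str) -> str:
    """Return a canonical form of `url` with trap parameters and the
    fragment removed, so session-ID variants deduplicate to one URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query, keep_blank_values=True)
            if k.lower() not in TRAP_PARAMS]
    return urlunsplit((parts.scheme, parts.netloc, parts.path,
                       urlencode(kept), ""))

seen: set[str] = set()

def should_crawl(url: str) -> bool:
    """Skip any URL whose normalised form has already been fetched."""
    canon = normalise(url)
    if canon in seen:
        return False
    seen.add(canon)
    return True
```

With this in place, `should_crawl("https://example.com/p?sessionid=abc")` returns True on first sight, and the variant `...?sessionid=def` is then rejected because both normalise to the same URL. Real crawlers combine this with per-site depth and URL-count limits, since normalisation alone cannot defuse calendar-style traps where every generated URL is genuinely distinct.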