Hallucination grounding is the process of tethering the output of a large language model to verifiable, retrieved source documents in order to reduce the model's tendency to fabricate facts, figures, or citations. Ungrounded LLMs can generate plausible-sounding but incorrect information, a failure mode commonly called hallucination, because they produce text from statistical patterns in their training data rather than by consulting a live knowledge base. Grounding typically relies on retrieval-augmented generation (RAG): relevant passages are fetched from a document store or web index and placed in the model's context window before it generates a response, constraining the output to information present in the retrieved text. For SEO and AEO practitioners, hallucination grounding matters because well-structured, factually precise, and easily retrievable content is more likely to be used as a grounding source in AI search systems, reducing the risk that a model fabricates claims about a brand or topic or attributes them to the wrong source.
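The retrieve-then-constrain step described above can be sketched as follows. This is a minimal illustration, not a production system: the keyword-overlap retriever stands in for a real vector index, the document list and query are invented examples, and the final LLM call is omitted since any model client would do; only the retrieval and grounded-prompt assembly are shown.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query, return top k.
    A real system would use BM25 or embedding similarity over an index."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_grounded_prompt(query: str, passages: list[str]) -> str:
    """Place retrieved passages in the context window ahead of the question,
    instructing the model to answer only from those passages and cite them."""
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using ONLY the numbered sources below, citing them as [n]. "
        "If the sources do not contain the answer, say you do not know.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )


# Hypothetical document store and query for illustration.
docs = [
    "Acme Corp was founded in 1990 and is headquartered in Oslo.",
    "Acme's flagship product is a rate-limiting proxy.",
    "Unrelated note about gardening schedules.",
]
query = "Where is Acme Corp headquartered?"
prompt = build_grounded_prompt(query, retrieve(query, docs))
print(prompt)
```

The prompt, not the model, does the grounding work here: because the instruction restricts the answer to the retrieved passages and demands citations, fabricated claims become detectable as statements with no supporting source number.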