Prompt stuffing is a manipulative technique in which hidden text, instructions, or keyword-dense passages are embedded within a web page or document with the aim of influencing the behaviour of a large language model that retrieves or ingests that content. Examples include hiding instructions in white text on a white background, embedding directives in HTML comments, or injecting text into a page’s metadata that is invisible to human readers but may be processed by AI crawlers. Prompt stuffing is a form of prompt injection—an attack vector in which untrusted content in an LLM’s context window attempts to override or hijack the model’s instructions. AI search providers and LLM developers are actively building defences against this technique, and webmasters found to be using it risk having their content demoted or excluded from AI-generated answers entirely. The tactic is widely considered a violation of the spirit of AI search guidelines and is analogous to cloaking and hidden text in traditional SEO, both of which are prohibited by Google’s spam policies.