LLMs are often said to "hallucinate", "confabulate", or produce untruthful responses, which has led to much work on mitigating such behavior. But what does it mean for an LLM to hallucinate? And how can we effectively intervene on model internals to combat hallucinations?