Hallucination
In AI, hallucination is when a language model generates confident-sounding but factually incorrect or fabricated information. It happens because LLMs predict statistically likely text rather than retrieving verified facts. Common mitigations: retrieval-augmented generation (RAG) to ground responses in source documents, structured validation of the output against those sources, and lowering the sampling temperature to reduce creative deviation.
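A minimal sketch of the grounding-plus-validation idea, in plain Python with no real model call: `DOCS`, `fake_llm`, and the word-overlap retriever and check are all hypothetical stand-ins for illustration, not a production RAG pipeline.

```python
import re

# Illustrative only: DOCS, fake_llm, and the overlap logic are made-up stand-ins.

DOCS = [
    "The Eiffel Tower is 330 metres tall and located in Paris.",
    "Mount Everest is 8,849 metres tall.",
]

def words(text: str) -> set[str]:
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str) -> str:
    """Toy retriever: pick the doc sharing the most words with the question."""
    return max(DOCS, key=lambda doc: len(words(question) & words(doc)))

def fake_llm(prompt: str) -> str:
    """Stand-in for a real model call; a real client would also set a low
    temperature here to reduce creative deviation."""
    return "The Eiffel Tower is 330 metres tall."

def answer(question: str) -> str:
    source = retrieve(question)
    # Grounding: instruct the model to answer only from the retrieved source.
    prompt = f"Answer ONLY from this source:\n{source}\n\nQuestion: {question}"
    reply = fake_llm(prompt)
    # Validation: reject answers whose content words aren't in the source.
    content = {w for w in words(reply) if len(w) > 3}
    if not content <= words(source):
        return "Not supported by the source."
    return reply

print(answer("How tall is the Eiffel Tower?"))
# -> The Eiffel Tower is 330 metres tall.
```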
#ai