Papers in this collection:

- Chain-of-Verification Reduces Hallucination in Large Language Models (arXiv:2309.11495)
- Hallucination Detox: Sensitive Neuron Dropout (SeND) for Large Language Model Training (arXiv:2410.15460)
- DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations (arXiv:2410.18860)
- Do I Know This Entity? Knowledge Awareness and Hallucinations in Language Models (arXiv:2411.14257)