Post by Ethan Mollick on X: study on AI hallucinations

Ethan Mollick
@emollick
A couple of important facts about AI hallucinations from this paper:
1) Larger and more recent models hallucinate less than older & smaller models
2) LLMs hallucinate less when asked about more common information (if the topic has a Wikipedia page, for example) arxiv.org/pdf/2407.17468 pic.twitter.com/sU8xcCqpWF
8/8/24, 9:06 AM
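
A quick way to see the second point in practice: the sketch below (my own illustration, not from the tweet or the paper) uses the public Wikipedia REST API to check whether a topic has an English Wikipedia page, the same "commonness" proxy Mollick mentions. The risk labels and the demo topics are hypothetical assumptions for illustration, not the paper's method.

    # Minimal sketch: "has a Wikipedia page" as a rough proxy for topic
    # commonness, and so for expected hallucination risk (per the tweet's
    # second point). Illustrative only; not the paper's methodology.
    import requests

    WIKI_SUMMARY_URL = "https://en.wikipedia.org/api/rest_v1/page/summary/{title}"

    def has_wikipedia_page(topic: str) -> bool:
        """Return True if an English Wikipedia page exists for the topic."""
        resp = requests.get(
            WIKI_SUMMARY_URL.format(title=topic.replace(" ", "_")),
            headers={"User-Agent": "hallucination-heuristic-demo/0.1"},
            timeout=10,
        )
        # The summary endpoint returns 200 for existing pages, 404 otherwise.
        return resp.status_code == 200

    if __name__ == "__main__":
        # "Zyqqot Frumblewick" is a made-up name to show the negative case.
        for topic in ["Large language model", "Zyqqot Frumblewick"]:
            risk = "lower" if has_wikipedia_page(topic) else "higher"
            print(f"{topic!r}: {risk} expected hallucination risk")

Topics that fail this check are the long-tail subjects where, by the tweet's second point, you should expect more hallucination and verify model answers more carefully.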

Joseph Thornton
