@[email protected] to [email protected] · English · 1 year ago
We have to stop ignoring AI’s hallucination problem (www.theverge.com)
@[email protected] · English · 14 · 1 year ago
That’s an excellent metaphor for LLMs.

@[email protected] · English · 19 · 1 year ago
It’s the Chinese room thought experiment.

@[email protected] · English · 5 · 1 year ago
Hadn’t heard about it before (or maybe I did but never looked into it), so I just went and found it on Wikipedia and will be reading all about it. Thanks for the info!

@[email protected] · English · 6 · 1 year ago
No worries. The person above did a good job explaining it, although they kind of mashed it together with the imagery from Plato’s allegory of the cave.