@[email protected] to [email protected] • English • 1 year ago
We have to stop ignoring AI’s hallucination problem (www.theverge.com)
533 points • 207 comments
minus-square@[email protected]linkfedilinkEnglish2•edit-21 year agoThey are right though. LLM at their core are just about determining what is statistically the most probable to spit out.
minus-square@[email protected]linkfedilinkEnglish1•1 year agoYour 1 sentence makes more sense than the slop above.