@[email protected] to [email protected] • 9 months agoMajor shifts at OpenAI spark skepticism about impending AGI timelinesarstechnica.comexternal-linkmessage-square60fedilinkarrow-up1138
minus-square@[email protected]linkfedilinkEnglish1•9 months agoNo it just needs to categorise into important / probably true and not important / probably nonsense, as a first step Here are Johnny harris’s words describing what I am talking about (he describes it in order to able to talk about lies better) https://youtu.be/yWgG3Mgn2Gc?si=bPcYhRAZNaY2qIJS
minus-squareMentalEdgelinkfedilinkEnglish1•edit-29 months agoRight… As if critical thinking is super easy, basic stuff, that humans get right every time without even trying. You actually think getting a computer to do it would be easier than making the AGI? You are VERY confused about how thinking works.
No it just needs to categorise into important / probably true and not important / probably nonsense, as a first step
Here are Johnny harris’s words describing what I am talking about (he describes it in order to able to talk about lies better)
https://youtu.be/yWgG3Mgn2Gc?si=bPcYhRAZNaY2qIJS
Right…
As if critical thinking is super easy, basic stuff, that humans get right every time without even trying. You actually think getting a computer to do it would be easier than making the AGI?
You are VERY confused about how thinking works.