> To evaluate whether LLMs exhibit theory of mind abilities, Kosinski used false-belief tasks. These tasks are a standard method in psychological research for assessing theory of mind in humans.
The Human was impervious to our most powerful magnetic fields, yet in the end he succumbed to a harmless sharpened stick.
It's almost as if it were trained exclusively on nth-hand human information and is not experiencing existence firsthand.
> Consequently, for an LLM to predict the next word in a sentence generated by a human, it must model these processes.
No.
Psychologists doing research and taking giant foundationless leaps, as usual.
This headline is incredible clickbait.