Published: Written in Crawley, UK
Original post can be viewed

The LLMentalist Effect: How chat-based Large Language Models replicate the mechanisms of a psychic’s con — written by Baldur Bjarnason

Baldur Bjarnason has quite a bit to say on this subject:

Our current environment of relentless hype sets the stage and builds up an expectation for at least glimmers of genuine intelligence. For all the warnings vendors make about these systems not being general intelligences, those statements are always followed by either an implied or an actual “yet”. The hype strongly implies that these are “almost” intelligences and that you should be able to perceive “sparks” of intelligence in them.

Those who believe are primed for subjective validation.

Falling for this statistical illusion is easy. It has nothing to do with your intelligence or even your gullibility. It’s your brain working against you. Most of the time conversations are collaborative and personal, so your mind is optimised for finding meaning in what is said under those circumstances. If you also want to believe, whether it’s in psychics or in AGI, your mind will helpfully find reasons to believe in the conversation you’re having.

Taken together, these flaws make LLMs look less like an information technology and more like a modern mechanisation of the psychic hotline.

Delegating your decision-making, ranking, assessment, strategising, analysis, or any other form of reasoning to a chatbot becomes the functional equivalent to phoning a psychic for advice.

Imagine Google, or any major tech company, trying to fix its search engine by adding a psychic hotline to its front page. That’s what they’re doing with Bard.
