The great misunderstanding of AI: why we think it's intelligent

There's that strange moment that almost all of us have experienced recently with ChatGPT or other AI. That moment when the machine formulates an answer so accurate that we lend it a form of humanity, of intelligence. The text is so coherent, the response so rapid, the formulation so fine-tuned that "it" can only be intelligent.

Since the advent of generative models, we seem to be increasingly confusing performance with understanding. We can't see the mechanics, so we imagine a mind. Or perhaps it's simply our brains, unable to tolerate a vacuum, that attribute intention, reasoning or meaning where there is merely a statistical sequence?

An age-old human reflex

There's nothing modern about this confusion. It arises as soon as an inert object borrows our codes. Perceiving a soul behind movement or regularity is an archaic reflex: we see faces in clouds, silhouettes in shadows, emotions in immobile objects. This mechanism has protected us for a long time, but it continues to deceive us.

We find this phenomenon in an equally unexpected area: military robotics. Some soldiers deployed with robot dogs develop a form of attachment to the robot. The robot moves forward, explores and returns. It repeats a stable, almost familiar behaviour. When it is destroyed, it is not just a tool that is lost: it is an imagined presence. So as soon as a system works, reacts or responds, we lend it a form of intention.

The rise of language models has only accelerated this illusion: when an AI nuances, contextualises or rephrases, our minds naturally slide towards the idea of reasoning. When it structures an analysis, we project a thought. When it imitates empathy, we think we perceive sensitivity. Language then acts like a veil: it masks the underlying probabilistic mechanics. What we take to be an understanding is an optimisation. What we read as an intention is merely a calculation.

Regaining a clear head in the face of the technologies we call "intelligent"

This is perhaps the first misunderstanding: we call a technology that has no intelligence "intelligence". From the moment we use the word, our minds take it on board. This shift is less about technology than about our age: an age that sometimes confuses speed with wisdom, automation with understanding, performance with thought. Acknowledging this mechanism does nothing to diminish the usefulness of AI; it simply prevents us from lending it something it has never had: an intention, a conscience, a form of mind.

Lucidity begins here: accepting that, despite its name, AI is not an intelligence. It is a sophisticated statistical model, remarkable for what it enables, but devoid of subjectivity. And it's only by looking at it for what it really is that we can decide what place we want, or don't want, to give it.