AI in business: the risks of confusing calculation with thinking

There's that strange moment almost all of us have experienced recently with ChatGPT or another AI: the moment when the machine formulates an answer so accurately that we lend it a form of humanity, of intelligence. The text is so coherent, the response so quick, the wording so precise that «it» can only be intelligent. Perhaps you recognise yourself in this. But does the AI really understand? Before you think about trusting it with everything, here's what we should all know.

What leads us to believe that AI is intelligent?

Since the advent of generative AI models, we seem to be increasingly confusing performance with understanding. We can't see the mechanics, so we imagine a mind. Our brains, unable to tolerate emptiness, attribute intention, reasoning or meaning where there is merely a statistical sequence.

There's nothing modern about this confusion; it arises as soon as an inert object borrows our codes: it's anthropomorphism. Perceiving a soul behind movement or regularity is an archaic reflex: we see faces in clouds, silhouettes in shadows, emotions in immobile objects. This mechanism has protected us for a long time, but it continues to deceive us.

This phenomenon can also be found in military robotics. Some soldiers deployed with robot dogs develop a form of attachment to the robot. The robot moves forward, explores and returns. It repeats a stable, almost familiar behaviour. When it is destroyed, it is not just a tool that is lost: it is an imagined presence. As soon as a system works, reacts or responds, we lend it a form of intention.

ELIZA effect

The tendency to overestimate the capabilities of a conversational system and attribute to it an understanding or intelligence that it does not possess. Named after ELIZA, a programme created by Joseph Weizenbaum at MIT (1964-1966) which simulated a psychotherapist: although it merely manipulated linguistic patterns, its users confided intimate thoughts to it, convinced that it truly «understood» them.

The rise of language models has only amplified this illusion: when an AI adds nuance, contextualises or rephrases, our minds naturally slide towards the idea of reasoning. When it structures an analysis, we project a thought. When it imitates empathy, we think we perceive sensitivity. Language then acts like a veil: it masks the underlying probabilistic mechanics. What we take for understanding is an optimisation; what we read as an intention is just a calculation.
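Those "probabilistic mechanics" can be made concrete with a deliberately crude sketch: a bigram model that picks each next word purely from frequency counts in a tiny corpus. Real language models are vastly larger and use neural networks over tokens rather than word counts, but the principle is the same: predict the most probable continuation of a statistical sequence. The corpus and function names below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus: the "model" knows nothing except which word tends to follow which.
corpus = ("the machine answers the question the machine predicts "
          "the next word the next word is just a calculation").split()

# Build bigram counts: for each word, how often each successor appears.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def next_word(word):
    """Pick the most frequent successor - pure counting, no understanding."""
    counts = successors[word]
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation from a one-word prompt.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # → "the machine answers the machine answers"
```

The output is fluent-looking yet produced by nothing more than counting: the system has no idea what a "machine" is, only that the word frequently follows "the".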

Data sovereignty

The outsourcing and governance of AI have become strategic challenges for organisations integrating artificial intelligence solutions into their business processes. Behind a conversational tool or an automation engine lie infrastructures, training models, data flows and technical dependencies often operated by international players.

The main providers of models and clouds are now mainly based in the United States (such as OpenAI, Microsoft and Amazon Web Services), while China is developing its own ecosystems (Alibaba Cloud, Baidu). The European Union is attempting to regulate these uses via the AI Act and the GDPR, but the operational reality remains complex: data hosted outside the EU, cascading subcontracting, models trained on uncontrolled corpora. The risk is not only legal, but also strategic and reputational.

Technological dependency, loss of sovereignty over data, difficulties in auditing models, uncertainties over the location of processing or the traceability of sources: without clear governance, AI becomes a blind spot in the information system and a strategic risk for organisations.

Regaining a clear head in the face of so-called «intelligent» technologies

In this context, should you entrust all your projects and strategy to an AI? The question is not so much whether to use it as how. The real risk would be to do so without critical distance.

We talk about «artificial intelligence», but the term is already shaping our perception. As soon as the word intelligence is mentioned, our minds spontaneously project human attributes: understanding, intention, discernment. Yet these systems have no consciousness, will or capacity for judgement. They calculate, correlate and predict. Recognising this cognitive mechanism in no way diminishes the usefulness of AI. It simply avoids attributing to it what it has never had: an intention, a responsibility, a form of mind.

The challenge is to maintain a clear-headed stance. Use AI for what it really is: an extremely powerful statistical tool, capable of amplifying analysis, speeding up production and automating certain tasks - but never of deciding in place of a manager. Digital maturity is not about delegating your judgement to the machine. It's about knowing exactly where its usefulness ends.

In a nutshell

A clear-sighted approach to artificial intelligence means accepting that, despite its name, AI is not an intelligence. It is a sophisticated statistical model, remarkable for what it makes possible, but devoid of subjectivity. Some of these solutions are hosted and provided by suppliers whose interests do not necessarily converge with those of the user's country or organisation. It is only by looking at AI for what it really is that we can decide on the place we want - or do not want - to give it.
