
Why AI can lead to bad decisions in business

Today, many organisations are building AI into their decisions, their content or their user journeys. The answers are fast, well structured and often relevant. So much so that a confusion sets in: that of considering these systems genuinely «intelligent».

This perception is far from trivial. It directly influences the way decisions are made, sometimes by overestimating the reliability of the results produced. Before relying on these tools, it is necessary to understand what they actually do… and what they do not.

What makes us overestimate the intelligence of AI

Since the advent of generative AI models, we increasingly seem to confuse performance with understanding. We cannot see the mechanics, so we imagine a mind. Our brains, unable to tolerate a vacuum, attribute intention, reasoning or meaning where there is merely a statistical sequence.
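To make that «statistical sequence» concrete, here is a deliberately tiny, hypothetical sketch: a bigram table of word frequencies used to pick the next word. Modern language models are vastly larger and rely on neural networks rather than lookup tables, but the underlying move is the same kind of next-token statistics, not comprehension.

```python
import random

# Toy bigram "language model": each word maps to the words that followed it in a
# tiny corpus, with their counts. No meaning, no intent, just observed frequencies.
bigram_counts = {
    "the":    {"report": 4, "meeting": 2, "risk": 1},
    "report": {"is": 5, "shows": 2},
    "is":     {"ready": 3, "late": 1},
}

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    words, counts = zip(*bigram_counts[word].items())
    return random.choices(words, weights=counts, k=1)[0]

def generate(start: str, length: int = 3) -> str:
    out = [start]
    for _ in range(length):
        w = next_word(out[-1])
        out.append(w)
        if w not in bigram_counts:  # nothing was ever observed after this word
            break
    return " ".join(out)

print(generate("the"))  # e.g. "the report is ready": fluent, yet nothing was "understood"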

There is nothing modern about this confusion; it appears as soon as an inert object mimics our behavioural cues: it is anthropomorphism. The same phenomenon is found in military robotics. Some soldiers deployed with robot dogs develop a form of attachment to the machine. The robot moves forward, explores and comes back. It repeats a stable, almost familiar behaviour. When it is destroyed, it is not just a tool that is lost: it is an imagined presence. As soon as a system works, reacts or responds, we lend it a form of intention.

ELIZA effect

The tendency to overestimate the capabilities of a conversational system and to attribute to it an understanding or intelligence it does not possess. Named after ELIZA, a program created by Joseph Weizenbaum at MIT (1964-1966) that simulated a psychotherapist: although it merely manipulated linguistic patterns, its users confided intimate thoughts to it, convinced that it truly «understood» them.
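As an illustration of the mechanism, here is a minimal sketch in the spirit of ELIZA's pattern matching. It is not Weizenbaum's original code, just a few hypothetical rules that reflect the user's own words back through fixed transformations.

```python
import re

# Illustrative ELIZA-style rules: a regex pattern and a canned response template.
# The "therapist" never understands anything; it only echoes captured words.
RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please go on."

def respond(message: str) -> str:
    """Return the first matching canned response, echoing the captured words."""
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return DEFAULT

print(respond("I feel ignored at work"))    # Why do you feel ignored at work?
print(respond("My manager never listens"))  # Tell me more about your manager never listens.
```

The second reply is obviously clumsy, which is precisely the point: nothing is understood, yet in a real conversation users readily fill that gap with imagined empathy.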

The rise of language models has only amplified this illusion: when an AI adds nuance, contextualises or rephrases, our minds slide naturally towards the idea of reasoning. This cognitive mechanism is not neutral: in business, it can lead to crediting AI with a level of reliability or understanding it does not possess.

Data sovereignty

The outsourcing and governance of AI have become strategic challenges for organisations integrating artificial intelligence solutions into their business processes. Behind a conversational tool or an automation engine lie infrastructures, training models, data flows and technical dependencies often operated by international players.

The main model and cloud providers are now based mostly in the United States (such as OpenAI, Microsoft and Amazon Web Services), while China is developing its own ecosystems (Alibaba Cloud, Baidu). The European Union is attempting to regulate these uses through the AI Act and the GDPR, but the operational reality remains complex: data hosted outside the EU, cascading subcontracting, models trained on uncontrolled corpora. The risk is not only legal, but also strategic and reputational.

Technological dependence, loss of sovereignty over data, difficulty auditing models, uncertainty about where processing takes place or how sources can be traced: without clear governance, AI becomes a blind spot in the information system and a strategic risk for organisations. This illusion of understanding is not only a cognitive problem. It also has very concrete consequences for the way organisations integrate these technologies.

Regaining a clear head in the face of so-called «intelligent» technologies

In this context, should organisations entrust all of their projects and strategy to an AI? The question is not so much whether to use it as how. The real risk is not using AI, but using it without a framework, without perspective and without human arbitration.

We talk about «artificial intelligence», but the term is already shaping our perception. As soon as the word intelligence is mentioned, our minds spontaneously project human attributes: understanding, intention, discernment. Yet these systems have no consciousness, will or capacity for judgement. They calculate, correlate and predict. Recognising this cognitive mechanism in no way diminishes the usefulness of AI. It simply avoids attributing to it what it has never had: an intention, a responsibility, a form of mind.

The challenge is to maintain a clear-headed stance. Use AI for what it really is: an extremely powerful statistical tool, capable of amplifying analysis, speeding up production and automating certain tasks, but never of deciding in a manager's place. Digital maturity is not about delegating your judgement to the machine. It is about knowing exactly where its usefulness ends.

In a nutshell

A clear-sighted approach to artificial intelligence means accepting that, despite its name, AI is not an intelligence. It is a sophisticated statistical model, remarkable for what it makes possible, but devoid of subjectivity. Some of these solutions are hosted and provided by suppliers whose interests do not necessarily align with those of the organisation or its country. It is only by seeing AI for what it really is that we can decide what place we want to give it, if any.
