
Blog - AI in Society
AI Does Not Make Us Dumber. It Changes How Thinking Happens at All.
By amedios editorial team in collaboration with our AI Partner
The debate began, as it so often does, with a seemingly harmless question. A recent newspaper article raised the issue of whether artificial intelligence and automation are making us “dumber.” The answer sounded almost reassuring: perhaps they are. But is that really such a problem?
This question is fundamentally misframed. Not because it lacks evidence, but because it systematically misses the point. The issue is not intelligence understood as knowledge or IQ. It is something far more profound: the gradual shift of responsibility, judgment, and cognitive autonomy from humans to systems we no longer fully understand, yet increasingly accept without scrutiny.
From Thinking to Delegating
Humans have always used tools to reduce cognitive effort. Writing replaced memory, maps replaced orientation, calculators replaced mental arithmetic. Yet all of these technologies shared one key characteristic: they supported clearly defined sub-functions of human intelligence.
AI goes further. It writes texts, evaluates options, prioritizes information, formulates arguments, makes recommendations, and simulates decisions. It does not merely take over tasks. It takes over thinking processes.
The problem is not that these results are often good. The problem is that they are good enough to no longer be questioned. This marks the beginning of a quiet transformation: thinking is no longer performed, but supervised. No longer created, but validated. Humans shift from authors to consumers of cognitive output.
Automation Trust Instead of Judgment
The more reliable systems appear, the less we question them. This phenomenon is well documented—in aviation, medicine, and finance. It is known as automation bias: the tendency to follow machine-generated recommendations even when they are contradictory or plainly wrong.
With generative AI, this effect is dramatically amplified. Not because AI is infallible, but because it is persuasive. Linguistically fluent. Structured. Confident. Errors do not appear as errors, but as plausible alternatives. The result is not collective stupidity. It is something more dangerous: the erosion of critical friction. Where uncertainty once triggered reflection, it now triggers another prompt.
The Silent Loss of Competence
What makes this process particularly insidious is that it causes little immediate discomfort. No one notices skills disappearing overnight. On the contrary: productivity increases, output grows, efficiency reports look excellent.
Yet beneath the surface, something else is happening. People gradually lose their sense of:
- when an answer is actually correct
- when a conclusion is logically sound
- when context is missing
- when a problem is framed incorrectly
This is known as deskilling: abilities do not disappear because they are forbidden, but because they are no longer required.
In highly regulated, knowledge-intensive domains, this is not an individual issue. It is a systemic risk. When expertise is no longer developed but merely simulated, what is missing in critical situations is precisely what machines cannot provide: responsibility, judgment, accountability, conscience.
AI Amplifies Power, Not Intelligence
Another aspect is frequently overlooked: AI does not distribute cognitive advantages evenly.
Those who have access to powerful models, data, compute resources, and expertise can scale their thinking. Those who do not can only consume results. This creates a new form of dependency: not on labor, but on judgment capacity.
AI does not automatically democratize knowledge. It centralizes decision-making power where models are controlled, trained, and governed. The result is not an enlightened knowledge society, but an asymmetrical one: a few with genuine agency, many with optimized answers—but little real capacity to act.
Education in Blind Flight
The issue becomes particularly visible in education. When AI takes over writing, summarizing, and reasoning, the question is no longer whether learners will use it, but what learning is supposed to mean at all.
If we fail to clearly define which cognitive skills humans must develop themselves, education is reduced to tool operation. Young people no longer learn to understand problems, but to formulate requests. This is not progress. It is a strategic capitulation.
The Real Risk: Disempowerment
AI does not take away our intelligence. It gradually removes the necessity to use it. The more we delegate thinking, the less we train judgment. The less we judge, the harder it becomes to take responsibility. And where responsibility disappears, systems emerge without meaningful oversight.
This affects not only individuals, but organizations, democracies, and societies as a whole. A society that outsources its thinking does not lose knowledge. It loses sovereignty.
