
AI, reasoning, and the illusion of morality

F. · Nov 18 · 7 min read

Updated: Nov 21

A reflection on thought, technology, and human responsibility


Note: This piece was written in collaboration with an AI. The reflections, arguments, and tone are entirely my own — but the act of writing emerged through dialogue. I wanted to explore what happens when reasoning itself becomes a shared space between human curiosity and machine logic. The result isn’t automation, but amplification: a conversation that expands thought rather than replaces it.

AI helped shape and structure this piece — but not its meaning. Meaning, interpretation, and responsibility remain entirely human.


  1. AI as a new literacy

Using AI effectively is a new literacy. Not a matter of clever prompting, but of learning to think with a system that produces possibilities without understanding them.


This literacy depends on recognising several essential distinctions:


Evidence

AI systems contain vast amounts of factual information learned from their training data, and they can recall these facts with impressive accuracy. However, they do not know these facts in a grounded or verifiable sense: they do not track sources, cannot distinguish authoritative information from unreliable text, and cannot independently validate truth.

Their “factual knowledge” is a statistical memory of patterns, not evidence-based understanding.


Logic

AI can follow logical templates but does not enforce coherence unless instructed by humans.

It can generate contradictions without noticing them.


Causality

While statistical models can infer causal relationships when deliberately designed for that purpose, large language models cannot. They recognise correlations but cannot independently distinguish cause from correlation. Any causal logic they express reflects language patterns, not understanding.


Limits of inference

AI does not construct internal models of the world; it predicts text. Apparent insight is pattern completion, not comprehension. And because AI can operate at far greater speed and scale than any human, existing weaknesses in our reasoning — bias, superficiality, wishful thinking — can be amplified just as easily as our strengths.


Understanding these boundaries is essential. Without them, AI risks becoming not an aid to reasoning, but a distortion of it.

  2. How organisations use AI today

Most companies and governments still use AI for a narrow set of operational tasks:

  • drafting and summarising documents,

  • pattern recognition in data,

  • translations,

  • generating notes or minutes,

  • internal chatbots,

  • extracting entities or insights from text.


These uses save time, but they do not tap the deeper potential of AI as a tool for thinking.



  3. Thinking with AI as a cognitive amplifier

Thinking with AI does not mean delegating cognition.

It means using a system that can expand the search space of your reasoning.


A (large) language model can:

  • generate alternative formulations,

  • expose implicit assumptions,

  • propose edge cases,

  • combine ideas across domains,

  • surface patterns you might not see.


Not because it understands them, but because it samples from a vast distribution of human-written possibilities.

AI functions as a cognitive amplifier: it increases the number and diversity of ideas available for human evaluation.

But amplification is not replacement. The model does not evaluate truth, relevance, or coherence. The human must select, validate, and integrate the results into a meaningful line of thought.

AI expands what we can consider — but it cannot determine what we should think.

In this sense, AI can serve as a thinking companion. Not a mentor, not an authority, not a mind — but a space where reasoning becomes dialogical.


A system that can:

  • question the framing of a problem,

  • challenge the clarity of an idea,

  • propose counterarguments you had not considered,

  • reveal inconsistencies in your own logic,

  • pressure-test your assumptions by adopting different perspectives,

  • and help refine vague intuitions into sharper concepts.


Importantly, this conversational function is not “intelligence” in a human sense.

AI does not believe anything. It does not hold positions. It does not defend arguments out of conviction. But it can simulate disagreement, generate alternative viewpoints, and articulate objections drawn from the vast landscape of text it was trained on.

In doing so, it offers something that is surprisingly rare in human life: a tireless, non-egocentric partner for reflection.

A place where ideas can be stretched, reversed, reformulated, and tested — not to reach a predetermined truth, but to clarify our own thinking. It does not produce truth, but it deepens understanding.


In short, AI can expand what we see, but not what we value. It can broaden the landscape of possible thoughts, but not choose which path we walk. It can help us think more clearly — but only we can decide what is true, meaningful, and morally sound.

  4. The moral dimension of AI use

As AI becomes part of our reasoning process, another question emerges: how do we preserve our moral values while relying on machine-generated content?


AI is not a moral agent. It has no conscience, no interior life, no sense of harm or responsibility. It can generate moral language — empathy, compassion, dignity — but it cannot experience the ethical reality behind the words.


Therefore the burden of moral judgment remains entirely human.

It is our filtering, our attention, and our discernment that decide what is acceptable, questionable, or misleading.


A simple analogy clarifies this point:

Corporations also simulate moral language without possessing moral interiority.

Corporate statements often “sound ethical” because they serve reputation, not principle.

AI functions similarly: its moral vocabulary is a simulation shaped by statistical pattern optimisation.


But this analogy is only illustrative.

The central point is not about corporations, but about us.

When interacting with AI, we must remain the source of moral evaluation. AI can assist thought, but it cannot supply conscience. It can amplify reasoning, but cannot provide ethical grounding.

  5. The illusion of morality

AI can produce the vocabulary of empathy, solidarity, or compassion — but it cannot inhabit the experience behind these words. Its moral statements are syntactic rather than ethical: a fluent impression of meaning without the capacity to feel, doubt, or choose.


True morality — moral being — is uniquely human.

It requires responsibility for one’s judgments and the ability to question one’s own reasoning.


The danger is not that AI lacks morality. The danger is that we might allow its polished language to dull our own ethical attention.


The task is not to make AI moral. It is to ensure that our use of AI remains a moral act — guided by reflection, grounded in conscience, and aware of the difference between communication and conviction, automation and awareness, the grammar of virtue and the presence of it.


AI can expand our thinking.

Only we can ensure that thinking remains human.


  6. Turning insight into action: recommendations for public-sector organisations

For public organisations, adopting AI is especially challenging. They must protect sensitive information, comply with strict legal and ethical frameworks, and maintain public trust — all while working within rigid processes. They cannot simply integrate commercial AI tools as the private sector does. They need controlled, secure environments that allow officials to explore AI’s analytical and cognitive benefits without risking data leakage, policy misalignment, or institutional accountability. Distributed review across many desks can be a strength, but only if doubt, refusal, and slowing down are recognised and rewarded as part of good work rather than treated as obstacles to delivery.


These constraints explain why public bodies increasingly seek sovereign, well-governed AI systems: not for automation alone, but to create a safe space where reasoning can be expanded, ideas can be tested, and decisions can be examined from multiple perspectives. In other words, a place where AI can function as a thinking companion — without compromising the principles the public sector exists to uphold.


And this leads to the essential question:

If AI is not a source of truth, not a moral agent, and not a substitute for human reasoning, then what should public organisations do with this knowledge?


Most institutions — particularly in the public sector — risk falling into two traps:


  1. Treating AI as a productivity tool only, focused on drafting emails, summarising documents, or automating routine workflows.

  2. Avoiding AI altogether, fearing errors, hallucinations, or ethical dilemmas.



Both approaches miss the point.


The real opportunity is to use AI not merely for efficiency, but for better thinking across organisations.


Here are practical steps:


Build AI literacy around reasoning, not just tools

Public institutions invest heavily in software training but rarely in cognitive training.

AI literacy should include:

  • understanding evidence vs. pattern memory,

  • distinguishing correlation from causation,

  • recognising model limitations,

  • knowing when human judgment must intervene,

  • exercising ethical and contextual evaluation.


This is literacy not in prompting, but in thinking. And importantly, the EU’s AI Act makes this a legal obligation: Article 4 requires providers and deployers of AI systems to ensure “a sufficient level of AI literacy” among their staff and others operating AI systems on their behalf, meaning public bodies must build real cognitive competence, not box-ticking training.


Use AI as a structured space for analysis and debate

AI’s value is not only generative but dialectical. Public organisations could use AI to:

  • test policy assumptions,

  • generate alternative framings of a problem,

  • explore unintended consequences,

  • simulate stakeholder perspectives.


This turns AI into an internal reasoning companion, supporting deeper analysis rather than replacing expert judgment. However, for this to work in practice, time and review capacity must be explicitly budgeted; otherwise, deadline pressure will push teams back toward shallow and purely generative use.


Establish internal guidelines for responsible cognitive use

Not just “AI policies” about data or security, but guidance on:

  • when AI should assist reasoning,

  • when AI must not be used,

  • how to document AI-augmented decisions,

  • how to validate insights produced by models,

  • how to prevent over-reliance on AI.


This brings ethical reflection into daily work rather than treating it as a compliance exercise.


Cultivate organisational reflection

The biggest challenge in the public sector is not technical — it is cultural.


Most public bodies are built for:

  • procedural consistency,

  • risk avoidance,

  • hierarchical decision-making.


AI as a reasoning partner requires:

  • openness to questioning assumptions,

  • willingness to explore counterarguments,

  • a culture that values reflection rather than only compliance.


And it will not happen by rhetoric alone: promotion criteria, performance evaluations, and leadership signals need to reward well-documented doubt, careful refusal to use AI where it is inappropriate, and the ability to explain decisions in human terms — not just the volume or speed of AI-assisted output.


This is the real transformation.


Keep humans at the centre of moral and strategic judgment

AI can expand what public organisations consider, but only humans can decide what is right, responsible, or aligned with public interest.


Public-sector governance must reinforce:

  • human accountability,

  • ethical interpretation,

  • democratic oversight.


The goal is not to make AI moral, but to ensure that our use of AI remains a moral practice.

