Artificial intelligence as seen by two popes
Antonio Spadaro

As artificial intelligence reshapes work, culture and decision-making, two pontificates converge on a deeper concern – not technological progress itself, but the risk of reducing human life to efficiency, calculation and control.

In the handover between Pope Francis and Pope Leo, one theme stands out as genuinely decisive – perhaps more than is apparent to those who focus exclusively on ecclesiastical affairs.

At the heart of a moment that seems to be sliding toward an increasingly automated reality, Francis left his successor an open question that cuts to the core of our contemporary imagination: artificial intelligence. In this transition between pontificates, a conversation unfolds that concerns technology only indirectly. Its real subject is the meaning of being human in an age of thinking machines.

Their words – including those spoken during the first eight months of Leo’s pontificate – are already telling. There is no nostalgia for a lost world, no demonisation of progress. What emerges instead is an effort to think critically about the landscape now taking shape. To look inside the machine and ask who, in the end, truly occupies its centre of gravity.

Pope Francis brought into the global public debate a reflection on AI that was neither narrowly technical nor purely moralistic. From the outset, he acknowledged its positive potential. For him, artificial intelligence was not merely a threat; it was also a possibility.

For example, AI offers the potential to ease the burden on human labour, democratise access to knowledge, and foster encounters among people and cultures through automatic translation, data analysis, and neural networks capable of processing billions of pieces of information per second.

Toward the end of his pontificate, Antiqua et Nova, an important note on the relationship between artificial intelligence and human intelligence, was issued jointly by the Dicastery for the Doctrine of the Faith and the Dicastery for Culture and Education.

Francis repeatedly insisted on one essential point: AI is not neutral. It is a powerful instrument, and like all power, it carries the risk of manipulation, inequality, and violence. He spoke of “cognitive pollution,” a striking expression that captures the effects of a digital communication ecosystem increasingly governed by optimisation and calculation.

Fake news, deepfakes and the manipulation of public opinion are not accidents; they are symptoms of a deeper crisis of truth. Artificial intelligence can become the perfect weapon for those intent on bending reality to fit a functional narrative. In his final encyclical, Dilexit nos, he wrote: “In the era of artificial intelligence, we must not forget that poetry and love are necessary in order to save what is human.”

Pope Leo XIV has taken up this legacy and amplified it, beginning with the very choice of his name. Addressing the cardinals who elected him, the newly chosen pontiff explained his decision: “Pope Leo XIII, with the historic encyclical Rerum Novarum, confronted the social question in the context of the first great Industrial Revolution. Today, the Church offers to all the treasury of her social doctrine in order to respond to another industrial revolution and to developments in artificial intelligence, which pose new challenges for the defence of human dignity, justice, and labour.”

In his subsequent interventions, one idea emerges with clarity: AI can never replace what is specifically human – moral conscience, discernment and authentic relationship with others. The machine can imitate but not understand; it can process but not judge; it can learn but not love. Here lies the increasingly thin boundary between simulation and reality.

Artificial intelligence has reached a point where it can generate coherent texts, realistic paintings, and complex musical compositions. It can simulate dialogue, correct grammatical errors, and even produce literary commentary. The paradox is that the more convincingly the machine imitates a human, the more the human risks losing itself.

What does it mean, then, to be a person in a world where a machine can write an essay on love or compose a poem about absence?

Pope Francis insisted on the need for a “wisdom of the heart” that cannot be encoded. He emphasised the urgency of developing an ethics of artificial intelligence that places the dignity of the human person at its centre – not as abstract rhetoric, but as a line of resistance: the person as a non-computable, non-substitutable value.

This also means that AI must remain at the service of human beings, not replace them. “Not everything that is technically possible is morally acceptable,” he wrote.

Pope Leo XIV, for his part, has rejected every transhumanist seduction and every temptation to view technology as an unlimited extension of humanity. The machine, he has said, can assist but cannot redeem. Only the human being can open itself to the ultimate questions of existence; only the human can orient itself toward truth and the good. True intelligence is not the one that analyses data but the one that chooses responsibly, with conscience. In a word: the one who discerns.

At a time when algorithms decide who will see what, who will obtain a loan, and who will be selected for a job interview, ethics can no longer be a luxury. Pope Francis explicitly called for a binding international treaty to regulate the use of artificial intelligence – not only to prevent abuses, but to foster responsibility.

He urged that public debates include the voices of the excluded: the poor, migrants, children, and those who lack access to technology yet bear its consequences.

Pope Leo XIV has echoed this appeal by calling for multilevel governance of AI, inspired by the principles of the Church’s social doctrine yet translatable into secular, shareable terms. In this sense, he invokes the concept of tranquillitas ordinis, the “tranquillity of order,” proposed by Augustine in The City of God. It is not enough to regulate AI’s uses; its purposes must also be governed. The machine cannot be left alone to set the agenda.

Both pontiffs see the danger not only in technology itself, but in the worldview it embodies – a vision that risks reducing the complexity of the human to a problem of efficiency.

For Pope Leo XIV, the machine must do more than function; it must contribute to a more humane order of social relations. The goal of AI cannot be performance alone, but justice. Not efficiency alone, but communion.

In an age that dreams of “augmenting” the human through technology, the real risk is ending up with a diminished humanity, impoverished in its capacity for judgment, relationship, and wonder.

Hence, the shared urgency – felt by both pontiffs – of an education in critical thinking, responsibility, and care. Ultimately, the real question is not what artificial intelligence can do, but what we want to do with it. And, above all, who we want to be.

In the novel Hako Otoko (The Box Man), published in 1973, the Japanese writer Kōbō Abe imagines a future in which human beings, to avoid pain, allow themselves to be replaced by artificial simulacra. Or rather, they voluntarily shut themselves inside a box, becoming simulacra themselves.

The box man is, in a sense, a proto-avatar: a body that no longer communicates directly but filters reality. Whenever life becomes too painful, the avatar takes its place. Ultimately, no one remembers who the original was. The ghosts have entered the machine.

Perhaps we are at that crossroads today. And perhaps this is why the voices of two pontiffs – two teachers of humanity – resound so forcefully in an age that imagines it can save itself through code. AI is here to stay. But we human beings are here to engage with it through our questions, our errors, and our freedom.

At a time when, in the Middle East, an army bombed civilian populations by allowing an artificial intelligence – tellingly named “Gospel” – to select its targets, we are forced to recognise how deeply we need what no machine will ever be able to learn, no matter how sophisticated its code: compassion.

*The views expressed in this article are those of the author and do not necessarily reflect the official editorial position of UCA News, from which the article is republished.


