Staff Reporter
In his first formal audience as the newly elected pontiff, Pope Leo XIV highlighted artificial intelligence (AI) as one of the most pressing issues facing humanity today.
“In our time,” Pope Leo stated, “the church provides its social teachings in response to a new industrial revolution and the challenges posed by AI, which threaten human dignity, justice, and labor.”
He referenced the legacy of his namesake, Pope Leo XIII, whose 1891 encyclical Rerum Novarum addressed workers’ rights and the moral implications of capitalism.
Pope Leo’s comments echo those of the late Pope Francis, who cautioned in his 2024 peace message that AI, devoid of human values such as compassion and morality, could prove profoundly dangerous if left unchecked.
Francis, who passed away on April 21, had called for an international treaty to regulate AI, insisting that this technology must remain “human-centric,” especially in military and governance contexts.
‘Existential Threat’
As concerns grow in religious and ethical circles, the scientific community is expressing a similar urgency.
Max Tegmark, a physicist and AI researcher at MIT, has drawn parallels between the onset of the atomic age and the current race to develop artificial superintelligence (ASI).
In a recent paper co-authored with three MIT students, Tegmark introduced the idea of a “Compton constant”: the probability that an ASI would escape human control. Named after physicist Arthur Compton, who calculated the odds of catastrophe before the first nuclear test in the 1940s, the concept underscores the need for caution.
“The companies developing superintelligence must calculate the Compton constant, the odds that we will lose control,” Tegmark told The Guardian. “It’s not enough to feel optimistic. They need to quantify the risks.”
Tegmark estimates a 90% probability that a highly advanced AI could pose an existential threat. His paper calls on AI companies to conduct risk assessments akin to those carried out before the first atomic bomb test, when Compton put the odds of a catastrophic runaway chain reaction at “slightly less” than one in three million.
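Tegmark’s paper does not prescribe a single formula, so the following minimal Python sketch is only an illustration of the kind of quantification he is calling for. It assumes a toy model in which an ASI escapes control only if several independent safeguards all fail; every safeguard name and probability below is hypothetical, not drawn from the paper.

```python
from math import prod

def compton_constant(failure_probs):
    """Toy estimate of the odds of losing control of an ASI.

    Assumption (not from Tegmark's paper): escape requires every
    safeguard to fail, and failures are independent, so the overall
    probability is the product of the per-safeguard failure rates.
    """
    return prod(failure_probs)

# Hypothetical failure probabilities for three invented safeguards:
# alignment training, sandboxed deployment, and human oversight.
estimate = compton_constant([0.5, 0.2, 0.1])
print(f"Toy Compton constant: {estimate:.3f}")  # 0.010, i.e. a 1% chance
```

The point of the exercise, in Tegmark’s framing, is not the particular model but that developers publish a number at all, so that the assumptions behind it can be scrutinized.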
As a co-founder of the Future of Life Institute and a strong advocate for AI safety, Tegmark believes that calculating such probabilities can foster the “political will” needed for global safety initiatives.
He also co-authored the Singapore Consensus on Global AI Safety Research Priorities, which outlines key research areas: assessing AI’s real-world impact, defining expected AI behavior, and ensuring consistent control over systems.
This renewed focus on AI risk management follows what Tegmark described as a setback at the recent AI Action Summit in Paris, where U.S. Vice President JD Vance downplayed safety concerns, saying the future of AI would not be won by “hand-wringing about safety.”
Nevertheless, Tegmark noted a revival in international collaboration: “The gloom from Paris has lifted, and cooperation is back on track.”