Editor's note: This is the first of a two-part series on Laudato Si' and artificial intelligence. The second part is scheduled for release on April 5, 2024.

I'm a major-league doomer. I have an anxious tendency to expect disaster at every turn. I see the glass as half empty, arrive at the departure gate two hours early and plan my vacations with a detailed spreadsheet. Risk management is my superpower. If you or your organization needs someone to predict the worst-case scenario, I'd be happy to help. Fees charged on a sliding scale.

So it's no surprise that I'm an “AI doomer.” Despite the saturation-level hype surrounding its potential, I see generative artificial intelligence as an existential threat. And I'm not alone.

Apparently the latest Silicon Valley watercooler chats revolve around one's “p(doom)”: the probability that superintelligent, superpowerful machines will wipe out humanity. Some put the threat at 100 percent. As a believer, I don't know that I can go that far. But depending on the day, I come close.

If today's headlines are to be believed, there is good reason for concern. According to various experts, we will all eventually worship robots, be destroyed by robots, lose our jobs to robots, or merge with robots and have our consciousness uploaded to the cloud. It's going to be one or the other.

God, help us. As a Catholic, I am grateful that Vatican II asserted that the church shares in the “griefs and anxieties” of our age. When it comes to AI, I have plenty of both.

But I've found comfort and encouragement in Laudato Si', Pope Francis' 2015 encyclical addressing another existential threat: climate change. Quoting Pope Benedict, it calls for “eliminating the structural causes of the dysfunctions of the world economy.”

And Francis' prescriptions for doing so apply just as readily to AI, because climate change and AI pose a strikingly similar danger: the concentration of power in the hands of a few multinational corporations. They have created what Francis calls the “technocratic paradigm,” in which “life gradually becomes a surrender to situations conditioned by technology, itself viewed as the principal key to the meaning of existence.”

These technologically conditioned situations are developing faster than we can adapt. Laudato Si' even coined a word for this: “rapidification.” New technologies have been part of the human story since we first struck two pieces of flint to make fire, but historically slow progress is now accelerating at a dizzying pace.

Twenty years ago, people could still function in society without a mobile phone. By 2024, even refugees seeking safety in the United States are forced to apply for asylum via an app. You cannot opt out of modern technology today; as Laudato Si' laments, trying to do so has become radically “countercultural.”

Threats to human work

New technology has changed work significantly, sometimes for the better. But now, if the prophecies are to be believed, AI threatens to eliminate our jobs entirely.

Sam Altman, CEO of OpenAI (maker of ChatGPT and DALL-E), aims to develop artificial general intelligence (AGI) that outperforms what he calls the “median human” at many tasks. His former partner Elon Musk envisions a world in which human labor is eliminated entirely. And while that might come as a relief to anyone toiling under an allegedly tyrannical, union-busting boss, it would be a disaster for humanity. That is why Laudato Si' devotes an entire section to “the need to protect employment,” declaring that work is “a necessity, part of the meaning of life on this earth, a path to growth, human development and personal fulfilment.”

Altman's solution is for AI companies to accumulate the world's wealth and distribute it as a universal basic income to those who hand over the biometric data of their eyeballs. Perhaps he takes after his longtime mentor Peter Thiel, who, along with Musk, co-founded the company that became PayPal in order to establish a wealth-transfer platform beyond government control.

To be fair, Altman does express concern about the fate of a humanity left with nothing to do but play video games. But other tech titans, including Google co-founder Larry Page, seem unconcerned. When Musk expressed horror at Page's indifference to the possibility of AI destroying humanity, Page called Musk a “specieist” (defined by The New York Times as “a person who favors humans over future digital life forms”).

The accusation of speciesism was anticipated by Laudato Si', which warns against a technocracy “which sees no special value in human beings.” What is needed to combat this, the encyclical says, is an “adequate anthropology”: one that respects humanity's unique capacities of knowledge, will, freedom and responsibility, and that understands human beings as the summit of God's creation, created in God's image, loved and redeemed, possessing transcendence and a unique and precious dignity, with inherent rights and a vocation to eternal beatitude.

We are not merely “meat computers,” as some AI enthusiasts deride us, inferior to computers made of chips and circuits.

Threat to God's Worship

But creating an AI that is actually better than humans, or at least smarter than us (in Altman's words, a “magic intelligence in the sky”), is the stated goal of many. As Ilya Sutskever, chief scientist at OpenAI, warned on X: “If you value intelligence above all other human qualities, you're gonna have a bad time.”

Perhaps AI could help solve some of today's biggest problems, including climate change. But its power and capabilities may lead some to treat it as, in Musk's words, a “digital god” not to be defied. Sutskever warns that AI will be “very powerful” and must therefore be instilled with “a strong desire to be nice to people.” Otherwise, he speculates, it will probably treat humans the way we treat animals: usually kindly, until we get in its way.

The deification of AI is a real threat, as Laudato Si' hints: those immersed in technology know that, in the most radical sense of the term, it moves toward “a lordship over all.” An extreme example is Way of the Future, a church founded by former Google engineer Anthony Levandowski for people interested in “the worship of a Godhead based on artificial intelligence,” since “what is going to be created will effectively be a god.”

At the other extreme are some Christians who fear that AI is the Antichrist. At OpenAI, Sutskever reportedly takes a dualistic approach: he leads chants of “Feel the AGI!” at office holiday parties while burning a wooden effigy representing its “unaligned” doppelganger.

Humans have been fashioning gods with their own hands for a long time. The first of the Ten Commandments, a foundation of our moral code, warns against exactly this. I don't think an actual god is being built in Silicon Valley. But I do believe many people will treat advanced AI like a god: an all-knowing oracle, an ever-positive and ever-helpful life coach, a source of comfort.

“This is something you appeal to, not something you control,” warns Mo Gawdat, former chief business officer at Google X.

Musk aspires, through AI, to “understand the true nature of the universe.” For those who accept it, the beginning of wisdom is no longer the fear of the Lord; it is the enthronement of AI.

Threat to social responsibility

But my concern isn't only about what some might mistake for a digital god. I'm also worried about the people building it. In the words of Laudato Si', they are “far removed from the poor,” and through their wealth and technical expertise they exercise “an impressive dominance over the whole of humanity and the entire world.” Some of them even “consider themselves more human than others, as if they had been born with greater rights.”

I think of Marc Andreessen, the internet pioneer and venture capitalist. In his “Techno-Optimist Manifesto,” he rails against any attempt to centrally plan and govern humanity's technological future, and he labels “sustainability,” “social responsibility,” “trust and safety,” “tech ethics” and even my own superpower, “risk management,” as “enemies.”

I also think of the “effective altruists” who seek to accumulate wealth not to address present suffering but to build a utopian future, sometimes with eugenic tendencies. Among them is disgraced crypto tycoon Sam Bankman-Fried, currently in prison on fraud charges.

Their number includes past and present OpenAI board members and its CEO, Altman. Thiel, Altman's mentor, argues that democracy and freedom are incompatible, saying he looks forward to a world in which great people are free to exercise their will in society, unfettered by government, regulation or a “redistributionist economics.”

Is he concerned about “the poor of the present, whose life on this earth is brief and who cannot keep waiting,” as Laudato Si' puts it? “No,” he snorted in an interview with The Atlantic, asserting that “there are enough people working on that.” There are not.

Some may dismiss my doomerism as an overreaction. Yet in Laudato Si' I found the pope's validation that “doomsday predictions can no longer be met with irony or disdain.” And plenty of expert opinion supports this addition to the church's social teaching.

When Geoffrey Hinton, the “godfather of AI” and Sutskever's mentor, resigned from Google to warn the world about what he had helped create, he echoed Laudato Si': “The danger that man will not use his power as he should is growing day by day.”

Gawdat's warning against having children because of the threat of AI echoes the encyclical's question: “What kind of world do we want to leave to those who come after us, to children who are now growing up?” And Altman and Sutskever's calls for an international AI regulatory body support Francis' call for “stronger and more efficiently organized international institutions.”

Forty-two percent of CEOs surveyed believe AI could destroy humanity within five to 10 years. According to Musk, “the apocalypse could come at any time.” But if it does, the tech titans will be ready.

Musk originally planned to escape Earth on a rocket to Mars, until he remembered that a rogue AI could follow him there. Thiel has bought property in New Zealand, where he may be joined by Altman, a onetime doomsday prepper who has stockpiled guns and gold.

Thiel's other major protégé, Meta's Mark Zuckerberg, is building a top-secret bunker compound in Hawaii even as he races to develop his own AI, joining the ranks of today's ultrawealthy bunker builders. But none of it is likely to matter, because, as Altman admits, “none of this will help if AGI goes wrong.”

The same goes for me. All I have is a finished basement and a garage.

But the fact that tech titans are preparing to survive an apocalypse, even as they amass vast fortunes and race to develop ever more powerful and potentially dangerous AI, is precisely why Laudato Si' asks: “In whose hands does all this power lie, or will it eventually end up? It is extremely risky for a small part of humanity to have it.”

Their arrogance, irresponsibility and greed might be laughable, or pitiable, were they not the richest men in the world, running powerful multinational corporations that sometimes “exercise more power than states themselves.”

I don't want to be an AI doomer. I truly hope my fears about the future are proven wrong. Still, I can understand why Francis found it troubling to write Laudato Si', and I appreciate his observation that, amid the onslaught of technology, “people no longer seem to believe in a happy future.”

“Yet all is not lost,” Francis encourages. We can “acknowledge our deep dissatisfaction” and “embark on new paths to authentic freedom.” Together, we can mount “a resistance to the assault of the technocratic paradigm.”

This call to action gives hope to this deeply dissatisfied doomer. I am ready to stand and resist with those who, like Francis, seek to reboot our relationship with technology, shape a brighter future and find new paths to freedom.

There are many risks with AI. I want to see them managed.


