As world leaders gathered in New York for the annual high-level meetings at the United Nations last week, they discussed very real problems of war and disaster. But they also began taking a serious look at an issue that, for now, remains largely theoretical: the danger posed to humanity by artificial intelligence.
“Generative artificial intelligence holds much promise — but it may also lead us across a Rubicon and into more danger than we can control,” U.N. Secretary General António Guterres told the assembled world leaders as he opened the summit. He noted that only two world leaders had mentioned AI when he first appeared as U.N. chief in 2017. “Now AI is on everyone’s lips — a subject of both awe and fear,” he said.
The sudden visibility of AI over the past year has turned heads. It’s already in use in war zones including Ukraine, and there are serious fears the technology will upend the livelihoods of everyone from car manufacturers to Hollywood writers. As Today’s WorldView has noted, many see this as an “Oppenheimer” moment — a reference to Robert Oppenheimer, the American physicist who led the creation of the atomic bomb.
After making the bomb, Oppenheimer became a proponent of nuclear nonproliferation and worked with the nascent United Nations. Now, more than half a century later, industry and government officials alike are again looking to the world’s top multilateral institution, the United Nations, for leadership.
Guterres has taken up the challenge. At the United Nations last week, the secretary general continued his push for a High-Level Advisory Body on Artificial Intelligence, with the aim of eventually establishing a U.N. agency devoted to AI — a request made most prominently by OpenAI CEO Sam Altman, an American AI researcher sometimes compared, often by himself, to Oppenheimer. Altman has suggested the International Atomic Energy Agency (IAEA) could serve as a model for the global coordination of AI governance.
But for those watching the global nuclear debate over recent years, the comparison with AI may not be reassuring. More than 65 years after the creation of the IAEA, the war in Ukraine and the sudden increase in nuclear tensions it created have called into question whether a fractured and divided United Nations is serving its purpose. Why would AI be any better?
The U.N.’s plans for AI are still in an early stage, but they are expected to move fast. Applications to join the High-Level Advisory Body are already running into the thousands. The aim is to form the board by October, so it can prepare its final report with recommendations by September 2024, when Guterres is hosting the “Summit of the Future” at next year’s high-level U.N. meeting.
Already, there are signs of division. The idea of an IAEA model for AI regulation has support because of the nuclear watchdog's long history of fostering international cooperation on sensitive issues. However, some within the U.N. system don't believe that the IAEA, with its focus on physical nuclear material, provides the right model for safeguarding a digital, intangible technology like AI.
Other potential models have been suggested, including the Intergovernmental Panel on Climate Change, which focuses more on expert advice. Some believe that there is not necessarily a need for a new agency at all. Aki Enkenberg, team lead for innovation and digital cooperation at Finland’s Ministry for Foreign Affairs, told Time Magazine recently it looked like a “hasty move” to insist on a new agency when existing bodies might work.
The challenge posed by AI is complicated by the still-uncertain nature of its impact and “possible pathways” through which AI could threaten humanity. “It took decades to build an effective system of control for atomic energy even with a common view of the risk,” Ian Stewart, executive director of the James Martin Center in Washington and a former official with the U.K. Ministry of Defense, wrote for the Bulletin of the Atomic Scientists in June. The dominance of the private sector in AI aggravates governance challenges, he added.
It is also being shaped not by academic researchers, but tech company upstarts — people who likely have very different values than the U.N. diplomats sitting in Turtle Bay, and much greater power. Is Altman, for example, really keen to cede control of AI to the United Nations? His sternest critics say no, he’s just cynical.
“You say, ‘Regulate me,’ and you say, ‘This is a really complex and specialized topic, so we need a complex and specialized agency to do it,’ knowing damn well that that agency will never get created,” tech writer and podcaster Jathan Sadowski was quoted as saying in a sharp New York Magazine profile of Altman. “Or if something does get created, hey, that’s fine, too, because you built the DNA of it.”
The dichotomy between political leaders and tech leaders was highlighted by a surprising voice from a world leader during an event hosted by Elon Musk in San Francisco before the U.N. summit.
“You have these trillion-dollar [AI] companies that are produced overnight, and they concentrate enormous wealth and power with a smaller and smaller number of people,” said Israeli Prime Minister Benjamin Netanyahu, a conservative icon and strong supporter of free markets. “That will create a bigger and bigger distance between the haves and the have-nots, and that’s another thing that causes tremendous instability in our world. And I don’t know if you have an idea of how you overcome that?”
While grappling with these new “superintelligence” issues, the U.N. will still have to contend with the all-too-familiar problem of geopolitical divides. An earlier AI-related push, known as the Campaign to Stop Killer Robots, was founded without the support of major countries like the United States. This time, Russia has indicated it will not support the creation of a new U.N. agency to tackle AI, undermining any potential consensus that Guterres hopes to build.
At several side events in New York last week, there was also concern that AI was becoming another area of divide between wealthy nations and the Global South. Speaking at the New York Public Library, Nigerian Communications Minister Olatunbosun Tijani noted that “even the conversation on governance [on AI] has been led from the West.”
The recent lessons from nuclear governance are hardly encouraging. Since the war in Ukraine began, there has been a breakdown in order, with deep divisions among the permanent members of the U.N. Security Council — all of whom are nuclear-armed. The IAEA has found itself, quite literally, in the middle of fighting in Ukraine, while key deals like the multilateral Treaty on the Non-Proliferation of Nuclear Weapons and the bilateral New START agreement are under severe strain.
Nonnuclear countries pushing for disarmament agreements, such as the 2017 Treaty on the Prohibition of Nuclear Weapons, have not only found little support for such efforts from the governments of nuclear-armed countries but also seen little evidence that the aim of delegitimizing nuclear weapons use is succeeding. “We seem incapable of coming together to respond” to existential threats, Guterres said in his speech last week.
AI could prove to be a giant leap for the U.N. system, helping countries fight major issues like poverty and hunger. However, the disruption AI causes might one day make nuclear weapons look like firecrackers. The United Nations will have to battle not only self-interested world powers but also world-making tech barons. It’s a fight it can’t afford to lose.