
USA, OMIKAMI-TV - Chinese President Xi Jinping and U.S. President Joe Biden agreed late in 2024 that artificial intelligence (AI) should never be empowered to decide to launch a nuclear war. (01/03/2025).
The groundwork for this excellent policy decision was laid over five years of discussions at the Track II U.S.-China Dialogue on Artificial Intelligence and National Security convened by the Brookings Institution and Tsinghua University’s Center for International Security and Strategy.
By examining several cases from the U.S.-Soviet rivalry during the Cold War, one can see what might have happened if AI had existed back in that period and been trusted with the job of deciding to launch nuclear weapons or to preempt an anticipated nuclear attack—and had been wrong in its decisionmaking.
Given the prevailing ideas, doctrines, and procedures of the day, an AI system “trained” on that information (perhaps through the use of many imaginary scenarios that reflected the current conventional wisdom) might have decided to launch nuclear weapons, with catastrophic results.
"My Journey on the Nuclear Brink" is a continuation of William J. Perry's efforts to keep the world safe from nuclear disaster.
The film tells the story of his coming of age in the nuclear age, his role in efforts to shape and contain it, and how his thinking has changed regarding the threat posed by these weapons.
In his extraordinary career, Perry has dealt head-on with changing nuclear threats. Decades of experience and special access to classified knowledge regarding strategic nuclear options have given Perry a unique and chilling perspective to conclude that, "Nuclear weapons do more to endanger our security than to safeguard it," William J. Perry's simulants.
Thankfully, in the examples I will consider—the 1962 Cuban missile crisis, the September 1983 false-alarm crisis, and the November 1983 Able Archer exercise—a human being showed greater awareness of the stakes and better common sense than the received wisdom and doctrine of the day. To be sure, it would be imprudent to assume that humans will always show restraint in such situations, and well-trained AI systems may provide useful inputs to human decisions.
"Environmental tragedies like Chernobyl and Exxon Valdez remind us that catastrophic accidents are always possible in a world of dangerous technologies," wrote the 1993 Best Book Award Winner, Science, Technology, and Environmental Studies Section of the American Political Science Association.
"Nuclear weapons' seemingly excellent safety record, however, has led academics, policymakers, and the general public to believe that nuclear weapons can serve as a safe deterrent for the foreseeable future," said Scott Douglas Sagan, provocatively challenging that optimism.
Sagan's research into previously classified archives pierces the veil of safety that has shrouded U.S. nuclear weapons and reveals a hidden history of terrifying "near-disasters."
Nevertheless, it is sobering that when facing the real possibility of nuclear Armageddon, human beings exhibited a level of thoughtfulness and compassion that machines, trained in cold-blooded “rational” ways, might not have possessed at the time and might not possess in the future.
Three close calls with nuclear Armageddon
The Cuban missile crisis began when U.S. intelligence learned that the Soviet Union was shipping nuclear-capable missiles and tactical nuclear weapons to Cuba over the course of 1962, in an attempt to improve the nuclear balance with the United States. Even though they did not know the extent to which the Soviets already had nuclear weapons on Cuban soil, almost all of President John F. Kennedy’s advisors, including the Joint Chiefs of Staff, recommended conventional air strikes against the Soviet positions.
Such strikes could easily have led to Soviet escalation, perhaps by nearby Soviet submarine commanders (armed with nuclear-tipped torpedoes) against U.S. warships, or by Soviet ground troops in Cuba (perhaps against the U.S. base at Guantanamo Bay). Kennedy opted for a combination of a naval quarantine of Cuba (to prevent any more weaponry from reaching the island by sea) and quiet backdoor diplomacy with Soviet Premier Nikita Khrushchev that included offers to remove American missiles from Turkey and to never invade Cuba.
The Soviets were persuaded to take this deal, withdraw their missiles and nuclear weapons from Cuba, and halt any further military buildup on the island, then run by Fidel Castro and his government.
"In October 1962, at the height of the Cold War, the United States and the Soviet Union nearly came to nuclear conflict over the deployment of Soviet missiles in Cuba. In this hour-by-hour chronicle of those tense days," veteran Washington Post reporter Michael Dobbs says, "just how close we came to the end of the world."
"Here," he continues, "is the first gripping account of Khrushchev's plan to destroy the U.S. naval base at Guantánamo; the Soviet handling of nuclear warheads in Cuba; and the extraordinary story of the U-2 spy plane that disappeared over Russia at the height of the crisis," says Dobbs, who has taught at leading American universities, including Princeton.
Written like a thriller, "One Minute to Midnight" is a deeply researched account of what Arthur Schlesinger, Jr., called "the most dangerous moment in human history," and the definitive book on the Cuban missile crisis.
In the September 1983 false-alarm crisis, a single Soviet watch officer, Stanislav Petrov, saw indications from sensor systems that the United States was attacking the Soviet Union with five intercontinental ballistic missiles (ICBMs) that would detonate within perhaps 20 minutes.
“He was right,” a family friend said after news of Petrov’s death in 2017. “The satellites had mistaken the glare of the sun off the clouds for missiles.”
In fact, what the sensors had picked up were reflections of sunlight from unusual cloud formations; the sensors were not “smart” enough to recognize the reflections for what they really were. Realizing that any American attack on the Soviet Union would almost certainly be much larger—since a small attack would only provoke a Soviet retaliation and have little chance of causing meaningful damage to Soviet nuclear forces—Petrov single-handedly chose not to escalate the situation by recommending “retaliation” against the perceived American strike.
Responding to news of Petrov’s death, Rep. Adam Schiff (D-Calif.) tweeted, “Moments of nuclear tension demand careful self-control. You may not know Stanislav Petrov, but at the height of the Cold War, he saved the world.”
Whether an AI system would have reached that same prudent conclusion, when prevailing doctrine said that any incoming attack likely required immediate retaliation, is anyone’s guess. In this case, the actual human being improvised, using instinct more than formal protocol, to arrive at the correct decision when faced with the unthinkable possibility of an actual nuclear war. Petrov’s basic human essence and character seem to have saved the day, at least in this case.
Just a couple of months later, in November 1983, NATO undertook a major military exercise known as Able Archer during a very tense year in U.S.-Soviet relations. President Ronald Reagan had given his “Star Wars” speech that March, soon after declaring the Soviet Union an evil empire; then, in September, Soviet pilots shot down Korean Air Lines Flight 007 when it mistakenly strayed over Soviet territory, killing everyone on board. The United States was also preparing to station nuclear-capable Pershing II missiles in Europe, with a very short flight time to Moscow if ever launched.
So, when NATO conducted Able Archer, Soviet leaders worried that it might be used as cover to prepare a very real attack, perhaps with the aim of decapitating the Soviet leadership. At one point in the exercise, NATO forces simulated preparing for a nuclear attack by placing dummy warheads on nuclear-capable aircraft. Soviet intelligence witnessed the preparations but could not tell, of course, that the warheads were fake.
Soviet leaders thus “responded” by readying nuclear-capable systems with very real warheads of their own. American intelligence in turn witnessed those preparations—but a savvy U.S. Air Force general, Leonard Perroots, realized what was occurring and recommended to superiors that the United States should not respond by placing real warheads on its own systems.
"Whether doing so would have provoked one side or the other to launch a preemptive strike is anyone’s guess; however, the proximity of the weapons to each other, and mutual fears of a decapitating surprise attack, would have made any such situation extremely fraught," explains Leonard Perroots.
“In response to this exercise, the Soviets readied their forces, including their nuclear forces, in a way that scared NATO decision makers eventually all the way up to President [Ronald] Reagan,” says Nate Jones, author of Able Archer 83: The Secret History of the NATO Exercise That Almost Triggered Nuclear War and a senior fellow at the National Security Archive.
Would AI have done better?
In all three cases, AI might have elected to start a nuclear war. During the Cuban missile crisis, American officials considered the Western Hemisphere to be a sanctuary from hostile powers, and the consensus view was strongly in favor of preventing any Soviet, or communist, encroachment.
The year before, the United States, through the CIA, had attempted to work with Cuban exiles to overthrow Castro. Certainly, the positioning of Soviet nuclear weapons less than 100 miles from U.S. shores triggered prevalent American thinking about what was and was not acceptable.
Since no sensors could determine the absence of Soviet nuclear warheads, a “cautious” approach based on the doctrine of the day would indeed have been to eliminate those Soviet capabilities before they could be made operational. Only a very real American president—one who had heightened cautionary instincts after witnessing combat in World War II and watching the U.S. bureaucracy make a mess out of the Bay of Pigs attack on Cuba the year before—thought otherwise.
This example shows that the ban on AI starting a nuclear war should include cases in which conventional weapons might be used to strike nuclear-capable weapons or weapons systems.
With the false-alarm crisis in September 1983, it took an astute individual to realize the unlikelihood that the United States was attacking with just a few warheads. Indeed, a different officer, or an AI-directed control center, would likely have assessed that the five ICBMs were attempting a decapitation strike against Soviet leadership or could otherwise have drawn the wrong conclusion about what was going on. The result might well have been a “retaliatory” strike that was in fact a first strike, and that would have likely produced a very real American nuclear response.
With Able Archer, since American officials knew that they were only conducting an exercise, and knew that the Soviets knew as much, many would have been stunned to see the Soviets put real warheads into firing position. Most might have concluded that the Soviets were using the NATO exercise as a way to dupe NATO officials into lowering their guard as the Soviet Union prepared a very real attack.
"AI systems trained on the prevailing doctrines and standard procedures of the day would have likely recommended at the very least an American nuclear alert. And since both superpowers had plans for massive first strikes in those days, designed to minimize the other side’s potential for a strong second strike, a situation in which both sides had nuclear weapons on the highest wartime alerts could have been very dangerous," said Able Archer >.
"Yes, it is possible that very good AI might have determined restraint was warranted in these cases—and might do so in a future situation—perhaps even better than some humans would have. AI can be used as a check on human thinking and behavior. But these examples underscore how dangerous it could be to trust a machine to make the most momentous decision in human history. Xi and Biden made the right decision, and future leaders should stand by it," he concluded.
(Michael E. O’Hanlon) OMIKAMI-TV 

Michael E. O’Hanlon is director of research in Foreign Policy at the Brookings Institution, director of the Strobe Talbott Center for Security, Strategy, and Technology, co-director of the Africa Security Initiative, a senior fellow in Foreign Policy, and holder of the Philip H. Knight Chair in Defense and Strategy.