The decision to unleash nuclear weapons should not involve artificial intelligence in any way, according to UN Secretary-General António Guterres.
Speaking at an Arms Control Association (ACA) conference, he warned that “humanity is on a knife’s edge” and that the risk of nuclear war has reached “heights not seen since the Cold War.”
According to Guterres: “States are involved in a qualitative arms race. Artificial intelligence and other technologies are increasing the risk. A new form of nuclear blackmail has surfaced, bringing the danger of nuclear disaster perilously close. In the meantime, the system intended to stop nuclear weapons from being used, tested, and spread is deteriorating. Friends, disarmament is urgently needed.”
He argued that nations ought to disarm, and that those with nuclear weapons ought to lead by example.
In a statement, he called on the “United States and the Russian Federation to get back to the negotiating table, fully implement the New START treaty, and agree on its successor.”
“Until these weapons are eliminated, all countries must agree that any decision on nuclear use is made by humans, not machines or algorithms.”
Guterres’s warning comes amid ongoing concerns about AI’s role in the conflicts in Gaza and Ukraine.
IFL Science notes that although the concept of a nuclear-armed AI may seem remote, history shows that automated systems can pose exactly these kinds of risks.
During the Cold War, the Soviet Union built a system known as “Dead Hand” to guarantee nuclear retaliation in the event that an attack wiped out most of its command structure. The system monitored for various indicators of a nuclear strike; if an attack was confirmed, it would check communication channels with senior Soviet authorities and, should those fail, delegate launch authority to lower-level officers in a bunker.
Although the system was never triggered, a near miss on September 26, 1983, highlighted the vulnerabilities of such technology: a Soviet missile-detection system falsely signaled an incoming nuclear strike from the United States. Fortunately, Stanislav Petrov, the Soviet officer on duty, prevented an escalation in that moment. Subsequent investigation revealed that the satellites had misinterpreted sunlight glinting off high-altitude clouds.