Artificial intelligence (AI) specialists have warned that China, the United States and other leading players must seek common ground in regulating the military use of the technology to minimise the risk of nuclear escalation.
“There is going to be competition [in AI], that is no question, but there also has to be important cooperation,” said John Tasioulas, director of the Institute for Ethics in AI at Oxford University and a member of a committee that advises the Greek prime minister on AI.
“A big one has got to be: we need to make sure that AI’s use in the military sphere is under human control, especially with respect to nuclear weapons,” Tasioulas said in an interview in Oxford this month.
“The annihilation of the human race in a nuclear war is much more likely than annihilation of the human race by robots.”
Militaries around the world are increasingly adopting AI, including those of the known nuclear-armed states: China, the US, Russia, Britain, France, India, Pakistan and North Korea.
For example, the value of the Pentagon's AI-related contracts rose from US$261 million to US$675 million between 2022 and 2023, according to a report last year by the Brookings Institution, a Washington-based think tank.