- The Washington Times - Thursday, February 16, 2023

The Biden administration on Thursday announced a proposed arms control declaration to limit the use of artificial intelligence and autonomy by the world’s military forces and weapons systems.

The State Department’s arms control unit released the proposed declaration, calling on states to conduct legal reviews to ensure that AI-enabled military weapons adhere to international law and international humanitarian norms.

“The declaration consists of a series of non-legally binding guidelines describing best practices for responsible use of AI in a defense context,” the department said in a statement.

“We view the need to ensure militaries use emerging technologies such as AI responsibly as a shared challenge,” the statement said. “We look forward to engaging with other like-minded stakeholders to build a consensus around this proposed declaration and develop strong international norms of responsible behavior.”

The Biden administration has made arms control a major element of its national security policies at a time when both China and Russia have expressed no interest in restraining their arms programs. China’s military is said to have adopted a policy called “unrestricted warfare” that calls for imposing no limits on military forces during war.

The declaration was unveiled at the Summit on Responsible Artificial Intelligence in the Military Domain, known as REAIM 2023, which concluded Thursday in the Netherlands. The conference was co-hosted by the Dutch and South Korean governments.

Artificial intelligence-powered weapons under development by the Pentagon use sensors and computer algorithms that can independently identify targets and use on-board computers to destroy them without human control. U.S. military and defense leaders have said that American AI weapons may be developed in response to similar systems being built by China and Russia.

The Pentagon’s most recent annual report on the Chinese military said the People’s Liberation Army “is pursuing next generation combat capabilities based on its vision of future conflict, which it calls ‘intelligentized warfare,’ defined by the expanded use of artificial intelligence (AI) and other advanced technologies at every level of warfare.”

Russia announced last year that its Defense Ministry had set up a department dedicated to developing weapons powered by artificial intelligence.

That announcement followed the Pentagon’s establishment of the Joint Artificial Intelligence Center led by Chief Information Officer John Sherman.

The State Department declaration says a growing number of nations are building AI weapons, raising questions of who controls deployment and firing decisions and whether AI systems can go beyond the wishes or intentions of their designers.

“Military use of AI can and should be ethical, responsible, and enhance international security,” the declaration states, calling on nations to maintain human control over nuclear weapons, and make sure senior officials remain in charge of the use of AI weapons.

Other provisions say the world’s militaries should adopt and publish principles for AI weapons design and “minimize unintended bias” in the high-tech weapons.

The development of AI weapons should also minimize the risk that such systems cannot be halted if they “demonstrate unintended behavior.”

A 2017 report on artificial intelligence arms by Harvard’s Belfer Center warned that AI is revolutionizing warfare and espionage and ultimately could destroy humanity. According to the government-sponsored study, machine learning, a subset of AI, is advancing much faster than expected and will provide military leaders with powerful high-technology weaponry and spying capabilities.

The range of AI weapons potentially includes robot assassins, superfast cyberattack machines, driverless car bombs, and swarms of small explosive “kamikaze drones,” the report said.

“Speculative but plausible hypotheses suggest that General AI and especially superintelligence systems pose a potentially existential threat to humanity,” states the 132-page report by Gregory C. Allen and Taniel Chan for the director of the Intelligence Advanced Research Projects Activity, the U.S. intelligence community’s research unit.

• Bill Gertz can be reached at bgertz@washingtontimes.com.

Copyright © 2023 The Washington Times, LLC.