Sunday, December 9, 2018


By Louis A. Del Monte

Prometheus Books, $19, 320 pages (paper)

Will future wars be waged by robots, with humans spared the agony and bloodshed of the battlefield? And can lethal machines be trusted not to turn on their supposed masters and blast them into oblivion?

For better or worse, the world’s three great powers — the United States, Russia and China — are in pell-mell competition to apply artificial intelligence to the weapons systems of the future. And, for the moment at least, China seems to have the edge.

Louis A. Del Monte paints a truly frightening picture of the likely new shape of warfare before the century ends. Some high-tech luminaries are talking about a “Sputnik moment” in the development of the new weaponry, as significant as the space race of decades past.

Indeed, automation of a sort already exists. Drones controlled by technicians sitting at consoles thousands of miles away regularly pummel targets in the deserts of the Middle East. Gunnery on naval vessels can lock onto adversaries that are far beyond the horizon.

The U.S. military is approaching the new technology with understandable caution. Mr. Del Monte, a scientist with broad experience both in industry and research, writes that “for the present at least, it still requires a human to decide when a drone makes a kill.”

The next step would be to create what the military terms “autonomous weapons,” which the U.S. Defense Department defines as a “weapon system that, once activated, can select and engage targets without further intervention by a human operator.” In military jargon, these weapons are often termed “fire and forget.”

The surge in the development of such weapons is driven, of course, by quantum leaps in computer technology. Although many readers — me included — will scratch their heads over Mr. Del Monte’s descriptions, his message is crystal clear: Robotic weapons are a fact of military life.

One outstanding U.S. development is an “unmanned combat air system” built for the Navy by Northrop Grumman for carrier operations. The plane, designated the X-47B UCAS, can take off and land on carriers with “minimal to no human intervention.” So, too, for its in-flight refueling capacity.

But, as Mr. Del Monte suggests, only policy considerations restrict the plane from using its firepower without human involvement. It is now deployed chiefly on intelligence and reconnaissance missions.

Another valuable robotic weapon in the Navy’s arsenal is the Aegis Weapon System, which is designed for use “from detection to kill.” Aegis technology “integrates computer technology, artificial intelligence algorithms, and radar technology [and] does what human beings alone cannot do.” As Mr. Del Monte writes, “It is the most remarkable naval defense system in the world.”

Gen. Keith B. Alexander, as director of the National Security Agency and the U.S. Cyber Command, stressed a “proactive” approach in defending against cyberattacks. In a report to Congress in 2010, he singled out the Chinese as the “source of a great many attacks on western infrastructure and the U.S. electrical grid.”

His solution? “I would want to go in and take out the source of these attacks.” (In April 2018 Gen. Alexander was succeeded by Lt. Gen. Paul Nakasone.)

Both the United States and Russia are concentrating upon systems that detect and (hopefully) destroy incoming missiles. But as Mr. Del Monte notes, tests have shown them to be of “questionable reliability.”

At the ground level, Russia has reported tests to develop “a fully automated combat module” incorporating the AK-74 assault rifle, the latest generation of the famed Kalashnikov family. TASS, the government news agency, says the weapon is “fully automated” and incorporates technologies “that enable it to identify targets and make decisions.”

According to TASS, the system consists of a gun connected to a console that analyzes image data to identify targets and make AI [artificial intelligence] decisions over human life and death. Previous Russian claims have referred to “army sentry robots” that could attack intruders without human decision-making involved.

Mr. Del Monte makes plain that such weaponry eventually will exist, and at every level. One question is obvious: Will such developments “make engaging in war more attractive?” No longer will commanders have to write condolence letters to families. “Politically,” he writes, it’s “more palatable to report equipment losses than human casualties.”

A United Nations study group is examining the ethical dilemmas that such weapons systems pose. One fear, prompted by various experiments, is that “smart weapons” will develop a capacity for self-preservation, causing them to turn on the persons employing them.

Eventually, it seems obvious that a civilized society must address the question: “Can sophisticated computers replicate the human intuitive decision-making capacity?” Should human beings be taken out of the loop?

• Joseph C. Goulden writes on intelligence and military matters.


Copyright © 2020 The Washington Times, LLC. Click here for reprint permission.


