The Washington Times, Friday, April 20, 2007

What happens when military weapons become automated, autonomous and robotic and begin to make decisions formerly made by people? This is happening.

For years, computers in weapons have been making more and more decisions, in part because human reflexes and thinking are not fast enough.

When something flies toward a warship, appearing suddenly low over the horizon and moving at supersonic speed, people do not have enough time to decide what it is and whether it should be destroyed. Consequently, computers make the decision to fire or not.
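To make the arithmetic concrete, here is a rough sketch, in Python, of the logic involved. The numbers, names and thresholds are my own illustrative assumptions, not those of any real combat system; the point is only that when the time to impact is shorter than the time a person needs to think, the decision falls to the machine.

from dataclasses import dataclass

# Assumed time for a human operator to evaluate a track; illustrative only.
HUMAN_ASSESSMENT_SECONDS = 45.0

@dataclass
class Track:
    range_km: float       # distance to the contact
    speed_m_s: float      # closing speed
    hostile_score: float  # 0.0-1.0 output of some classifier (assumed)

def time_to_impact(track: Track) -> float:
    """Seconds until the contact reaches the ship at its current closing speed."""
    return (track.range_km * 1000.0) / track.speed_m_s

def engagement_decision(track: Track) -> str:
    """Decide who decides: a person if there is time, otherwise the machine."""
    if time_to_impact(track) > HUMAN_ASSESSMENT_SECONDS:
        return "refer to human operator"
    # Too fast for people: the machine applies its own threshold and fires or not.
    return "engage" if track.hostile_score >= 0.9 else "hold fire"

# A supersonic sea-skimmer detected 20 km out leaves roughly half a minute.
print(engagement_decision(Track(range_km=20.0, speed_m_s=680.0, hostile_score=0.95)))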

The design of weaponry has increasingly moved toward unmanned systems. In the first place, they do not risk the lives of soldiers. Further, unmanned weapons can be much cheaper than the manned variety.

For example, much of the expense of a tank goes into armor to keep the crew alive, large engines to handle the weight of the armor, and so on. These considerations, plus advances in sensors and computers, are a strong incentive to take people out of the tank.

Unmanned does not mean autonomous: An unmanned tank could send back video of its surroundings and a human operator could decide what to shoot. This sort of remote control is done with some reconnaissance drones. But completely autonomous vehicles, which would choose their own targets, are moving toward practicality.

What rules, built into the weapons themselves, should control what they attack? This sounds like something out of Isaac Asimov, but it is now a serious concern.

How does a missile, many miles from the ship that launched it, tell a warship from a cruise ship? In a political climate that does not well tolerate civilian casualties, the outcome of a war could depend on the missile’s decision.

From the Naval Surface Warfare Center at Dahlgren, Va., comes a curious paper dealing with exactly this question.

In the words of the author, John Canning, an engineer at Dahlgren, “Real-time media coverage has brought the destruction of war to the ‘living room’ and has added to the political reactions and a possible perception of excessive civilian casualties.”

The military has, of course, recognized for many years that dead children are bad PR, but autonomous weapons operating unsupervised could make the problem much worse.

Some form of control is needed: “The widespread utilization of armed fully autonomous unmanned systems will be impossible, from cost and performance standpoints, without it.” Note that the author seems to regard such widespread utilization as militarily desirable. It is, I think, an example of technological inevitability.

Mr. Canning asks, “What happens if the enemy spoofs our armed unmanned systems, and causes them to kill when they shouldn’t? Political support can disappear virtually instantaneously.”

His answer: “Let the machines target other machines, not people. Specifically, let’s design our armed unmanned systems to automatically ID, target, and neutralize or destroy the weapons used by our enemies — not the people using the weapons. This gives us the possibility of disarming a threat force without the need for killing them. We can equip our machines with nonlethal technologies for the purpose of convincing the enemy to abandon their weapons prior to our machines destroying the weapons, and lethal weapons to kill their weapons.”
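Here, as a purely illustrative sketch in Python, is what such a rule might look like in outline. The class names, confidence threshold and warning step are my assumptions, not anything taken from Mr. Canning's paper; they only show the shape of a rule that lets the machine engage weapons while keeping people off-limits.

from enum import Enum, auto

class TargetClass(Enum):
    PERSON = auto()
    WEAPON = auto()
    UNKNOWN = auto()

def choose_action(target_class: TargetClass, confidence: float,
                  warning_given: bool) -> str:
    """Permitted action under a 'target the weapon, not the person' rule."""
    if target_class is not TargetClass.WEAPON or confidence < 0.95:
        # People and anything ambiguous are off-limits to the machine.
        return "do not engage; refer to human supervisor"
    if not warning_given:
        # Give whoever is holding the weapon a chance to abandon it first.
        return "issue nonlethal warning"
    return "engage the weapon, not its operator"

print(choose_action(TargetClass.WEAPON, 0.98, warning_given=False))
print(choose_action(TargetClass.PERSON, 0.99, warning_given=True))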

Soon we may see robotic weapons setting off on their own, choosing targets so as, one hopes, to avoid killing civilians.

I suspect that it’s a technological fix that isn’t going to work, but maybe I’m wrong. We’ll see.
