The Washington Times - Friday, June 29, 2018

ANALYSIS/OPINION:

Two-and-a-half years ago, technology wizard and Stanford University Master of Science in Computer Science graduate David Gunning joined with DARPA, the Defense Advanced Research Projects Agency, to manage a program to develop explainable artificial intelligence.

And listen up: XAI, as the field is abbreviated, is where we want to head — this is where technology development ought to focus.

Why?

XAI is the common-sense older brother in a digitized world filled with flashy, privacy-invading, data-gobbling gadgets and machine-controlling bullies.

The goal of XAI, Gunning said in a recent telephone conversation, is not so much to “take human thinking and put it into machines,” as nearly all of today’s artificial intelligence seeks to do. Rather, XAI’s aim is to equip the machine with the ability to tell its human operators why it arrives at the conclusions it does — to make the machine explain itself, so to speak.

That means humans still stay at the helm. They’re not replaced by computers. Humans are the masters; machines are the tools. And what a tool XAI will prove, once developed.

Frankly, some of today’s A.I. pursuits require so much data collection and analysis, and spit back so much information, that those tasked with deciphering the results can quickly become overwhelmed. For example: Say your job is to sift through National Security Agency video and satellite feeds to find security risks, using artificially intelligent programs to help red-flag behaviors that go against the norm. The results could hit in the hundreds — thousands. So as an analyst, how do you determine which alarms are real and which are false?

Enter XAI, giving reasons for the red flags.

Suddenly, half those alarms — maybe more, maybe much more — can be booted from the batch, tossed aside because the analyst is able to see, for instance, that the truck in the video that was red-flagged for cutting across a strip of roadway wasn’t actually carting bomb-making materials to some secret destination. No, the driver was simply diverted from the normal route by a construction sign.

The A.I.-fueled alert on that truck was a false positive. Thanks to XAI’s charms, the analyst knows this.

Now multiply such false alarms by hundreds, thousands, even millions of bits of data, and suddenly you’ve got a tool that can help an analyst cut through the useless and whittle down to the pertinent. What a time-saver; what a manpower-saver.

This is great news on so many fronts.

“Analysts have to put their names on recommendations, but they don’t always understand why a recommendation to red-flag came,” Gunning said.

Understanding breeds trust. XAI arms a person with the knowledge of “why and when to trust,” he said.

Understanding also helps uncover inherent bias — one of the biggest challenges confronting A.I. today.

“If there’s bias in the training [model], the system will learn that bias,” Gunning said. “Now [with XAI], you have an explanation of bias.”

That means analysts can counter the bias by determining if it’s justified, before they recommend action. What a consolation; what a relief.

DARPA isn’t the only entity working on explainable artificial intelligence. Researchers at the University of California, Berkeley, and at the Georgia Institute of Technology, to name a couple, have been trying out different software approaches to give neural networks the ability to explain themselves, so to speak.

But whoever cracks this code, whether with DARPA or a civilian outfit, will be providing a great service to technology.

Think of it: Would you rather your surgeon base an operating decision on radiology scans powered by artificial intelligence that simply collects data and shows the common denominators — or on scans that have been filtered for biases, held to the fire of accountability, and analyzed as to why that invasive procedure is truly the best course of action for your particular medical circumstance?

Right. It’s a no-brainer. Scans based on the explainable take the cake.

“You’re not trying to improve the accuracy of the machine running the technology,” Gunning said of XAI. “The numbers of false alarms are not changed. … But you can improve the accuracy of the explanation of the false alarms.”

That, in turn, leads to better decisions.

Moreover, achieving this improved decision process in a way that keeps humans in charge, rather than replacing them with machines, is the brass ring everyone in the A.I. world should be grabbing. Why send a machine to do a human’s job? XAI, much more than general A.I., keeps the roles right: Humans superior, machines subservient. That’s technology we can all cheer.

• Cheryl Chumley can be reached at [email protected] or on Twitter, @ckchumley.

