- The Washington Times - Friday, January 13, 2006

Years ago I read about attempts to determine states of mind by analyzing brain waves. The idea was that when you stepped through a security gate at an airport, sensors would invisibly probe to find out if you were under great internal stress and therefore, perhaps, a terrorist.

It didn’t work then, and it seemed a bit on the creepy side, but I figured that sooner or later someone would be able to make it work. It appears someone has.

The foundation technology is magnetic resonance imaging, or MRI. Now there is fMRI, for “functional MRI,” a real-time version that shows which parts of the brain are active at a given moment.

Researchers say that when people are lying, different parts of their brains “light up.” Specifically, say published reports, the amygdala, rostral cingulate, caudate and thalamus.

The technology is already being commercialized, for example by No Lie MRI (www.noliemri.com) and Cephos (www.cephoscorp.com). According to an article on Wired.com, No Lie MRI will debut its services this July in Philadelphia, where it will demonstrate the technology to be used in a planned network of facilities the company is calling VeraCenters.

Each facility will house a scanner connected to a central computer in California. As the client responds to questions using a handheld device, the imaging data will be fed to the computer, which will classify each answer as truthful or deceptive.

The technology is new and not everyone is convinced that it will work as advertised. Are there classes of people who can fool it, as for example psychopaths?

If it only works most of the time, then presumably it will be treated like today’s polygraphs, whose evidence is seldom accepted in court.

But suppose it proves possible to determine with fingerprint certainty whether someone is lying. That is not now the case, but the technique is young, and the implications are far from trivial.

For example, if one could ask, “O.J., did you do it?” with perfect confidence that the truthfulness of the response would be known with unerring technological certitude, then why have trials?

In the case of the courts, it would seem hard to argue against the idea. A sure answer from a machine seems preferable to a jury’s best guess. And in the case of screening people for security clearances, no objection seems reasonable.

But the result would be to make it impossible to lie to government. Depending on the government, the consequences could be ugly. (“Do you truly love comrade Stalin?”) In principle, as the technology got faster, better, and cheaper, which is what technologies do, police could use it for screening. (“Have you ever smoked marijuana? Step into cell six.”)

Truthfulness is usually regarded as a virtue. How serious are we about it? (“Have you ever cheated on your taxes?”) Do we want a society in which you can never safely lie about anything? Cephos makes the point on its Web site that you can refuse to submit to fMRI, but, if the technology becomes close to infallible, refusal would be regarded as an admission of guilt.

If it is as successful as its proponents hope, either we will come up with tight controls over its use, or it will join the galloping array of surveillance technologies that increasingly intrude on us.
