The Washington Times - Monday, August 2, 2004

The release of the September 11 commission’s report, the earlier Senate Intelligence Committee’s highly critical assessment of the intelligence community’s performance before the Iraq war, and the Bush administration’s imperative to act with an election near at hand all point strongly toward change in the CIA and the intelligence community overall. The September 11 commission’s major recommendations are structural: installing a national intelligence director and staff (which would control budgets and personnel) and creating a National Counterterrorism Center.

But changes that move the organizational boxes do little to address a major analytical shortcoming: the reliance on “inherited assumptions.” In a speech to CIA employees last February, CIA Deputy Director for Intelligence Jami Miscik termed this failure to re-test assumptions and conclusions reached in the run-up to the Iraq war the “single most important aspect of our tradecraft that needs to be examined.”

Assumptions drive analysis. Analysts and policy-makers alike assumed that Iraq, despite U.N. restrictions and inspections, maintained a minimal weapons of mass destruction program and that after U.N. inspectors left in late 1998, it would renew efforts to upgrade its WMD capabilities. This failure to challenge a basic assumption, as Ms. Miscik noted, also suggests that alternative scenarios (which turned out to be closer to the truth) were not seriously examined.

Why are faulty judgments made? Faced with a daily flood of diverse information, analysts construct a “reality,” a mental model rooted in basic assumptions, of what they believe is happening. But the simplified strategies used to process and evaluate information also lead to mental errors, or cognitive biases. As Richards J. Heuer Jr. states in his book, “Psychology of Intelligence Analysis,” cognitive biases “are particularly pertinent to intelligence analysis because they affect the evaluation of evidence, perception of cause and effect, estimation of probabilities, and retrospective evaluation of intelligence reports.”

Analysts work in a minefield of conflicting information. Does the imagery showing trucks at an identified but supposedly closed bioweapons facility mean that renewed production is imminent? Or are the trucks those of looters motivated by profit? Is the human source, who may have a serious drinking problem, telling the truth? Or is the source merely telling us what he believes we are looking for? Or is the source controlled by a group with a particular policy agenda and thus peddling disinformation?

Can intelligence analysis be improved? And how? It can, but analysts and managers alike must first recognize the limits of the mind, the cognitive pitfalls, in evaluating the many variables present in complex intelligence issues. More information is seldom the answer, though actionable intelligence from reliable sources is certainly golden. But analysts rarely have such high-value information to predict the course of events with certitude.

Is more expertise the answer? More analysts with extensive substantive, area, and linguistic backgrounds are essential. Unfortunately, analysts with many years on a particular intelligence account often fall prey to a “seen that, heard that” mindset that dulls their ability to evaluate nonconforming intelligence pointing toward alternative conclusions.

Improved analytic performance rests on three elements. First, periodic training is essential to improving analysis. Most valuable is training in how our mental processes work, in how we take in information and form judgments. Case studies of past intelligence failures provide invaluable lessons. So do hands-on exercises that demonstrate the usefulness of simple analytic tools in overcoming our cognitive limitations.

Hard-pressed managers are loath to lose analysts; hard-charging analysts dislike leaving their accounts. Nonetheless, training is vital, and managers need to shed old mindsets that training is only for new hires and those in need of help.

The second element is overcoming the tyranny of current reporting and briefings. Analysts need on-the-job time to meet with fellow analysts to share insights and debate opposing views, to re-test assumptions and alternative outcomes, and to gain the perspectives of outside experts. Managers need to encourage these interactions and find ways to let them take place.

Finally, intelligence is useless unless the policy-maker acts on it. With the intelligence community’s budget at roughly $40 billion, its products deserve to be taken seriously. Yet policy-makers reach decisions based not only on what the intelligence community provides but also on what they hear from advisers, official or unofficial, and from other sources. Some officials try to be their own analysts, rely on a small staff, or place too much weight on the views of foreign leaders they know.

Reorganization of the intelligence community may provide the political benefit of appearing to address the problem. But in the end, intelligence successes and failures depend on analysts and their work. Although mental pitfalls will continue to bedevil intelligence analysis, training analysts to recognize these limitations and to use procedures that compensate for them can improve the intelligence product.

Intelligence failures cannot be eliminated. Intelligence analysts are neither seers nor sorcerers, and some ambiguity is unavoidable in most intelligence assessments. Intelligence producers are obligated to give policy-makers their best judgments, including the knowns, the unknowns and the ambiguities. But it is equally imperative that intelligence consumers use intelligence with full knowledge of its inherent limitations.

Philip A. True is a former CIA analyst and manager who directed interagency training courses dealing with warning concepts and analysis.
