
“I think what they wrote about terrorism reflects the fact that people knew the threat of non-state actors was growing in the 1990s,” said Mr. Horowitz, who co-authored a Foreign Policy article criticizing the shortcomings of the Global Trends reports. “That was a clear trend noticed by people inside and outside the intelligence community. But there wasn’t enough imagination to see that something like 9-11 could happen. What they are doing with these reports is extremely hard.”

Ironically, the National Intelligence Council’s signature forecasting product — National Intelligence Estimates, which represent the intelligence community’s best collective judgment and are roughly akin to the Global Trends reports, albeit focused on national security issues — was born from a similar predictive failure: a series of miscalculations involving China and North Korea that preceded the Korean War.

Did the establishment of NIEs lead to better predictive judgments? Not necessarily. A 1962 estimate erroneously concluded that the Soviet Union would not put offensive weapons in Cuba; a 1964 report mistakenly stated that Israel had “not yet decided” to build nuclear weapons; and the 2002 NIE on Iraqi weapons of mass destruction proved badly wrong.

Of course, the above missteps join a long, undistinguished line of confident, informed, forward-looking analysis that later ran aground on the rocky shores of unexpected reality, including future Federal Reserve Chairman Ben Bernanke’s 2005 assurance that rising housing prices reflected “strong economic fundamentals,” the book “Dow 36,000,” and the competing models of the Soviet Union’s future held by dovish and hawkish American policymakers — none of which foresaw its demise.

In fact, it was the bipartisan failure to predict the relatively sudden dissolution of the Soviet empire under Mikhail Gorbachev and the end of the Cold War that prompted Mr. Tetlock to begin studying a difficult, mostly unexamined question: What, if anything, distinguishes political analysts who are more accurate with their predictions on particular issues from those who are less accurate?

Moreover, can those political analysts perform appreciably better than chance?

Foxes and hedgehogs

In 2006, Mr. Tetlock published his answers in “Expert Political Judgment: How Good Is It? How Can We Know?” The book — based on a 20-year study in which 284 experts from a variety of fields made roughly 28,000 predictions about the future — was both revelatory and much discussed. It found that political analysts:

• Are less accurate than simple extrapolation algorithms;

• Are only slightly more accurate than chance;

• Become significantly less accurate — and less likely to beat the dart-throwing monkey — when their predictions project more than one year into the future;

• Are overconfident, believing they know much more about the future than they actually do — for example, when they reported themselves as 80 or 90 percent confident in a particular prediction, they were often correct only 60 or 70 percent of the time;

• Are strongly disinclined to change their minds even after being proven wrong, preferring instead to justify their failed predictions or reinterpret them to fit their cognitive biases and preferred ways of understanding the world.

Mr. Tetlock divided forecasters into two thinking styles: hedgehogs, who are deeply knowledgeable about and devoted to a particular subject or body of knowledge; and foxes, who have eclectic interests and know a little about a lot of things.

Fox-style thinkers, he discovered, were more successful at predicting than hedgehogs — a counterintuitive finding that cuts against the whole notion of expertise.
