INTELLIGENCE ORGANIZATIONS AND THE ORGANIZATION OF INTELLIGENCE:
WHAT LIBRARY CATALOGUES CAN TELL US ABOUT 9/11
Thomas H. Hammond
Department of Political Science
303 South Kedzie Hall
Michigan State University
East Lansing, Michigan 48824
(517)-353-3282 (office)
(517)-432-1091 (fax)
thammond@msu.edu
Kyle I. Jen
House Fiscal Agency
State of Michigan
PO Box 30014
Lansing, Michigan 48909-7514
(517)-373-5015
(517)-373-5874
kjen@house.mi.gov
Ko Maeda
Department of Political Science
303 South Kedzie Hall
Michigan State University
East Lansing, Michigan 48824
(517)-432-0979 (office)
(517)-432-1091 (fax)
maedako@msu.edu
DRAFT OF MAY 13, 2003
Prepared for presentation at the Conference on Innovation, Institutions and Public Policy in a
Global Context, sponsored by the Structure and Organization of Government Research Committee
of the International Political Science Association and the Elliott School of International Affairs of
George Washington University in Washington, DC, on May 22-24, 2003. An earlier
version was presented at the annual meeting of the Midwest Political Science Association,
Chicago, Illinois, April 3-6, 2003.
***************************************************************
***************************************************************
“A great part of the information obtained in war is contradictory, a still greater part is false, and by far the greatest
part is somewhat doubtful. What is required of an officer in this case is a certain power of discrimination, which
only knowledge of men and things and good judgment can give.”
 Carl von Clausewitz, On War, Book I, Chapter VI (1832). Quoted in George S. Pettee, The Future of
American Secret Intelligence (1946, 48)
***
“The sole reason why any organization for intelligence work may be needed is that a volume of mental operations in
time is required, which is beyond the performance of any single mind. Its performance then requires the organized
activities of a number of workers.”
 George S. Pettee, The Future of American Secret Intelligence (1946, 68)
***
“It should be borne in mind…that the intelligence activity consists basically of two sorts of operation. I have called
them the surveillance operation, by which I mean the many ways by which the contemporary world is put under
close and systematic observation, and the research operation. By the latter I mean the attempts to establish
meaningful patterns out of what was observed in the past and attempts to get meaning out of what appears to be
going on now.”
 Sherman Kent, Strategic Intelligence for American World Policy (1951, 4)
***
“The management of the modern office is based upon written documents (the “files”), which are preserved in their
original or draft form, and upon a staff of subaltern officials and scribes of all sorts….The decisive reason for the
advance of bureaucratic organization has always been its purely technical superiority over any other form of
organization. The fully developed bureaucratic apparatus compares with other organizations exactly as does the
machine with the non-mechanical model of production. Precision, speed, unambiguity, knowledge of the files,
continuity, discretion, unity, strict subordination, reduction of friction and of material and personal costs – these are
raised to the optimum point in the strictly bureaucratic administration, and especially in its monocratic
form….Increasingly, all order in public and private organizations is dependent on the system of files and the
discipline of officialdom…”
 Max Weber, “Bureaucracy” in Economy and Society (1968, 957, 973, 988, emphasis added)
***
“Max Weber can provide greater insights into intelligence organizations than Ian Fleming.”
 Loch K. Johnson, Secret Agencies: U.S. Intelligence in a Hostile World (1996, 57)
***************************************************************
***************************************************************
I. INTRODUCTION
A disquieting aspect of many successful surprise attacks in modern warfare is that the defender discovered, after the attack, that it had already possessed, before the attack, a substantial amount of information
suggesting that an attack was on the way. Among such “intelligence failures” by the defenders are the
German invasions of France and Norway in 1940 and the Soviet Union in 1941, the Japanese navy’s attack on the American fleet at Pearl Harbor in 1941, the German attack on British and American forces in
the Ardennes in 1944 in the “Battle of the Bulge,” the Tet Offensive by the North Vietnamese and Viet
Cong against American and South Vietnamese forces in 1968, and the Egyptian attack on Israeli forces in
the Sinai in 1973. In each case, it was later revealed that essential information had already been collected
by the defenders’ intelligence agencies but that the data had been ignored or interpreted in ways which
limited, or even completely negated, an effective response; had the information been properly assessed
and disseminated, it was often argued, the defender should have been able to anticipate, disrupt, and perhaps even prevent these attacks. So prevalent have been these intelligence failures in the face of impending surprise attack that some analysts have even concluded that such failures are simply to be expected; as
Betts (1978, 88) put it, “Intelligence failures are not only inevitable, they are natural.”
Nonetheless, other analysts have argued that intelligence failures are not nearly so inevitable and that
surprise attacks are not always so successful. For example, Levite (1987, ch.3) cites the surprise attack
which the Japanese navy attempted on Midway Island in 1942, an attack intended to lure the U.S. fleet into a
decisive open-seas battle which the Japanese expected to win; however, the Japanese attack was decisively defeated by the U.S. navy’s own surprise counter-attack (which was made possible by deciphering
some of the Japanese navy’s communications codes). The surprise British-Canadian amphibious assault
on Dieppe in German-occupied France in 1942 was also a disastrous failure, at least for the participants
(though the attack did serve, as originally intended, to draw some German resources from the Russian
front). And Operation Market Garden in 1944, the ambitious Allied effort to seize bridges over the water
barriers in Holland, including the Rhine, so as to turn the northern end of the Siegfried Line defending the
German border and thereby open the way across the North German plains (see Kirkpatrick 1969, ch.5,
esp. p.208), was a surprise attack which failed for going “one bridge too far.” Moreover, some surprise
attacks which were initially successful were turned back relatively quickly; for example, while the German surprise attack on Allied forces in the Battle of the Bulge was momentarily successful, it never came
close to achieving its objective (seizure of the critical Allied port at Antwerp), and so it primarily served
to drain the German army of the last of its reserves: within five months, Germany was forced to surrender to the American and Soviet armies. For other surprise attacks it took much longer to reverse the initial gains, but even here the attackers never achieved their goals. For example, the German surprise attack
on the Soviet Union in 1941 was intended to reduce it to submission, but since Germany did not achieve
that goal, the attack primarily served to bring an immensely formidable foe into the war against her. Similarly, the Japanese surprise attack on Pearl Harbor in 1941 was intended, at least in part, to induce the
United States to come to an accommodation with Japanese war aims in China and Southeast Asia, but
since Japan did not achieve that goal, the attack served primarily to bring into the war the foe which ultimately defeated her.1
This debate regarding the “inevitability” of intelligence failures and the efficacy of surprise attacks
in modern warfare is now being replicated with regard to terrorist attacks on the U.S. On February 26,
1993, terrorists launched an attack on one of the World Trade Center towers in New York; this attack was
only partially successful in that it damaged foundation pillars (and killed six individuals) but it did not
bring down the building as intended. Another 1993 plot that was apparently intended to produce a subsequent “Day of Terror” in New York – to blow up the Lincoln and Holland tunnels connecting New York
and New Jersey, the George Washington bridge, the U.N., and the Federal Bureau of Investigation (FBI)
office in lower Manhattan (see White 2002, 6) – was stymied by American intelligence and law enforcement forces. The so-called “Millennium” terrorist plots against the U.S. were likewise all stymied in late 1999 and early 2000. Nonetheless, on September 11, 2001, terrorists using hijacked airliners were able to launch successful surprise attacks on the Pentagon and both towers of the World Trade Center, killing over 3,000 individuals, severely damaging a section of the Pentagon, and completely destroying both World Trade Center towers in the ensuing fires.
Footnote 1: It is odd that the literature concluding that surprise attacks will inevitably succeed, due to the defenders’ intelligence failures (see especially Betts 1978, 1980-81, 1982), rarely mentions anything about why the attacking forces can necessarily be expected to have adequate intelligence. Note that the three surprise attacks mentioned above which were failures – Dieppe, Midway, Market Garden – all involved attackers who lacked adequate intelligence about the defenders. For example, Kirkpatrick (1969, 51) reports that General Heinz Guderian, the leading proponent of armored warfare in the German Army, had published a book in 1937, Achtung Panzer!, which put Russian tank strength at 10,000, “a figure he altered from intelligence reports of that time, which gave the strength at 17,000.” Kirkpatrick then quotes Hitler as saying to Guderian, six weeks after the attack on Russia, “If I had known that the figures for Russian tank strength which you gave in your book were in fact the true ones, I would not – I believe – ever have started this war.” (The quotation which Kirkpatrick cites was originally from Heinz Guderian, Panzer Leader (1952).) In other words, both attackers and defenders can experience intelligence failures, and aside from the fact that the attacker has the initiative, it is not clear why attackers should necessarily be considered less prone to these failures than defenders.
In other words, there have been intelligence successes and intelligence failures involving surprise attacks by terrorists in the U.S., and as was the case with the “classic” surprise attacks mentioned previously, it has been discovered that U.S. national security agencies already had in their possession information which, had it been properly assessed and disseminated, might have allowed these agencies to disrupt, limit, or perhaps even prevent the attacks on 9/11. The ensuing debate has focused on two key questions, one involving the past and the other involving the future. The key retrospective question, of course, is what accounts for the intelligence failures which seem to have occurred on 9/11? That is, why were these attacks not detected and prevented? The key prospective question involves how to design a national intelligence community which can avoid such disasters in the future; that is, is there an alternative design which can yield a higher probability of detecting and preventing terrorist attacks? In general, then, participants in this debate want to determine the characteristics of the most successful intelligence community.
There is a huge literature on intelligence organizations and their role in national-security decisionmaking processes,2 and this literature has generated a large number of hypotheses about the causes of intelligence failures. Levite (1987, 9-12) provides a useful inventory. He begins with a list of the most prominent general explanations. First, there are individual failures in correctly assessing intelligence information, failures which are usually explained via reference to various kinds of theories of individual psychology (e.g., involving perception and cognition). Second, there are intelligence failures which stem from the interactions of humans in small groups; these explanations are usually described in terms of theories from social psychology. Third, intelligence failures are sometimes described as deriving from the pathologies of complex bureaucratic organizations. Fourth, intelligence failures are sometimes ascribed to various kinds of bureaucratic politics involving political interactions among the defenders’ military and intelligence organizations. Fifth, intelligence failures are sometimes described as stemming from “cybernetic” processes involving limitations on learning and information processing by individuals and organizations.
Footnote 2: For a website containing a large on-line bibliography of literature on a wide range of intelligence-related topics, compiled by J. Ransom Clark of Muskingum College in Ohio, see http://intellit.muskingum.edu/index.html.
Levite (1987, 13-18) then surveys explanations for intelligence failures which are more specific to the problems of intelligence agencies and warnings about surprise attack.3 First, those who are
planning a surprise attack may simply have a structural advantage: they have the initiative, and the best
the defender can often do is simply to react. Second, Wohlstetter (1962) argued that intelligence analysis
almost always involves the extraction of meaningful information from a mass of meaningless noise; the
resulting assessments are almost inevitably uncertain and error-prone. Third, the countries planning an
attack often effectively conceal their intentions, sometimes by constructing elaborate deceptions. Fourth,
the attacker may simply be indecisive or change his mind several times, and if the attacker himself does
not know what he is going to do, it will be even more difficult for the defender to know how to respond
appropriately. Fifth, if the defender’s intelligence analysts are too closely linked to policymakers, the
analysts may provide “intelligence to please” (Levite 1987, 17), as opposed to unbiased reports, whereas
if the intelligence analysts are too isolated from policymakers, the analysts’ information may be less useful (e.g., it may address the wrong questions or concerns) and so the policymakers will remain ignorant;
in either case, the policymakers’ decisions will be less than adequately informed. Sixth, intelligence information may be “compartmented” due to security requirements (to protect the collecting agencies’
sources and methods of collection), but this may prevent the full dissemination of the relevant information
to policymakers and thus limit their understanding of the implications of the information. Seventh, if the
defender responds to intelligence warnings before the surprise attack (in other words, the attack is no
longer a “surprise”), the would-be attacker may quietly cancel his plans for the attack, and so the defender’s intelligence officials may be accused of having raised a false alarm. This leads to the eighth and final explanation which Levite cites, which is that there may be a “cry ‘wolf’” syndrome – repeated warnings by the intelligence agency that turn out to be incorrect – which reduces the policymaker’s inclination to heed the agency’s warnings; indeed, Levite (1987, 17) suggests that “Students of strategic surprise have uncovered some evidence for the existence of the ‘cry ‘wolf’ syndrome’ in almost every historical instance of surprise.”
Footnote 3: The sequence of Levite’s explanations presented here has been modified from the original.
Accompanying this plethora of explanations for intelligence failures is an equally broad range of
prescriptions for reform. However, in the ongoing debate over 9/11, many of these prescriptions can be
grouped into three broad schools of thought. Perhaps the predominant point of view over the 60+ years
since Pearl Harbor is that the institutional structure of the intelligence community has a major impact on
how well that community performs its function of warning about surprise attack. Recall that the disaster
at Pearl Harbor was initially seen as involving several different but inter-related structural failures, such
as the lack of a unified intelligence community for collecting critical information about enemy capabilities and intentions, the lack of a trained corps of intelligence analysts who could interpret the information
that was collected, and the lack of a unified military command structure which systematically disseminated to policymakers whatever information was collected and whatever analyses were produced. As a result, the first post-war institutional reforms included the creation of a unified Central Intelligence Agency
(CIA) headed by a Director of Central Intelligence (DCI), a unified Department of Defense (DOD) headed by a civilian Secretary of Defense (SecDef), a unified Joint Chiefs of Staff (JCS), and a National Security Council for helping the president utilize the information and advice produced by these and other national security institutions; see Zegart (2003) for a recent historical account of the creation of the CIA,
JCS, and NSC. As Sherman Kent, a Yale historian who served in the Office of Strategic Services (OSS)
during World War II, and who subsequently wrote one of the earliest treatises on intelligence, Strategic
Intelligence for American World Policy (1951), observed, “the intelligence of grand strategy and national
security is not produced spontaneously as a result of the normal processes of government; it is produced
through complicated machinery and intense purposeful effort” (p.78).
However, these early responses to what were perceived as structural failures have rarely been seen as
adequate, and prescriptions for reform have been repeatedly advanced since then. Indeed, the managerial
authority of the Secretary of Defense has been steadily augmented, beginning as early as 1953 (see Hammond 1961); the last major reform, affecting both the Secretary of Defense and the Joint Chiefs of Staff, was the Goldwater-Nichols Act of 1986 (PL 99-433). Regarding the intelligence community, there has also been an ongoing series of reforms, affecting agencies such as the CIA, the National Security Agency (NSA), which collects what is called “signals intelligence,” and the institutions – the National Reconnaissance Office (NRO) and the National Imagery and Mapping Agency (NIMA) – which are involved in the
production of what is now called “geospatial” intelligence (e.g., photographic pictures from intelligence
satellites); see Berkowitz and Goodman (1989), Johnson (1996), Treverton (2001), and Lowenthal (2003)
for descriptions and evaluations. Symptomatic of the unsettled state of the structure of the intelligence
community is the fact that since the mid-1990’s alone there have been at least fourteen major governmental studies which have some kind of bearing on structural issues involving the intelligence community
(see Hill 2002—October 3, 2002, p.3).4
What allows these debates to continue after so many decades, in our view, is the simple fact that
most of the key structural issues remain unresolved, both theoretically and empirically. For example,
some observers argue for a substantial centralization of the intelligence community, under the managerial
and budgetary control of a Director of National Intelligence (whose powers would far exceed those of the
current DCI), and they recite historical cases – Pearl Harbor, of course, figures prominently among them
– which seem to illustrate the hazards of decentralization (or “fragmentation,” the more pejorative term
often used by the critics of decentralization).
Footnote 4: See 1995-1996: Commission on the Roles and Capabilities of the U.S. Intelligence Community (Aspin-Brown Commission); 1996: IC21—The Intelligence Community in the 21st Century (House Permanent Select Committee on Intelligence Staff Study); 1997: Modernizing Intelligence: Structure and Change for the 21st Century (Odom Study); 1998: Intelligence Community Performance on the Indian Nuclear Test (Jeremiah Report); 1999: The Rumsfeld Commission on the Ballistic Missile Threat; 2000: Countering the Changing Threat of International Terrorism, a Report from the National Commission on Terrorism (Bremer Commission); 2000: Report of the National Commission for the Review of the National Reconnaissance Office; 2000: National Imagery and Mapping Agency Commission Report; 2001: Road Map for National Security: Imperative for Change, the Phase III Report of the U.S. Commission on National Security/21st Century (Hart-Rudman Commission); 2001: The Advisory Panel to Assess Domestic Response Capabilities to Terrorism Involving Weapons of Mass Destruction (Gilmore Commission, Third Annual Report); 2001: Deutch Commission on Weapons of Mass Destruction; 2002: A Review of Federal Bureau of Investigation Security Programs (Webster Commission); 2002: House Permanent Select Committee on Intelligence Subcommittee on Terrorism Study; The Scowcroft Commission (Report not yet released as of October 3, 2002).
However, other observers have long argued that a decentralized intelligence community, characterized by at least some degree of competition and redundancy, has numerous virtues, and they recite historical cases which seem to illustrate the hazards of centralization. For instance, Kent (1951, 92) observed
that “People who shout duplication at the first sign of similarity in two functions and who try to freeze
one of them out on the ground of extravagance often cost the government dearly in the long run.”
These arguments for decentralization have themselves come under attack at various times. One sustained critique comes from Betts (1982), who argues that:
The most common recommendation of critics in the 1970s was to encourage pluralism in the collection and evaluation of intelligence to prevent dependence of decisionmakers on a small number of sources for information and advice. But pluralism, although beneficial, is no panacea.
Multiplying assessments within a government cannot with certainty neutralize misperceptions due
to ethnocentrism or cognitive predispositions. And while competition in assessment reduces the
dangers that important possibilities will be overlooked or suppressed, it increases the odds that
correct evaluations will take longer to work their way through the system; delay varies directly
with organizational complexity and debate. And competition without a central mechanism to discipline the process can be dangerous. (p.288)
Betts goes on to say that
inadequacies in warning are rarely due to the absence of anyone in the system ringing an alarm.
Usually the problem is either that a conceptual consensus rejects the alarm for political or strategic reasons outside the framework of military indicators or that false alarms dull the impact at the
real moment of crisis. The first problem cannot be averted by organizational change, because it is
an intellectual or cultural phenomenon that transcends differences in structure and process. The
second cannot be prevented because it occurs at the enemy’s pleasure. (pp.288-289)
A second structural issue involves how “close” the intelligence agencies should be to key policymakers. For example, some observers argue that the producers of intelligence-related information are often
too “distant” from policymakers, in the sense that what policymakers need for ongoing decision-making
is not what their intelligence agencies have to offer; from this perspective, a closer integration of information, advice, and policy choice is necessary. However, other observers argue, in effect, that familiarity
breeds contempt: if the producers of intelligence-related information are too close to the policymakers,
the intelligence chiefs will be induced to provide the policymakers with information and advice which
supports what the policymakers want to do anyway; from this perspective, the job of the intelligence
agencies is to provide policymakers with what they need to know rather than what they merely want to
know, and to guarantee that this “independent” and “unbiased” (even “unvarnished”) advice is produced,
considerable distance between the intelligence agencies and the policymakers should be maintained.5
Further complicating these discussions is the fact that some analysts make purely structural arguments, while others attempt to address not only the basic structural issues but also what they see as the
equally-important incentive issues which arise within any particular kind of structure. For example, while
some proponents of centralization focus solely on the structural issues involved in assembling a disparate
body of data into a compelling story of what some international adversary is most likely to do, critics of
centralization raise questions about the incentive effects which a single all-powerful intelligence chief
might have on the intelligence community (e.g., would everyone have to follow “the company line”?).
Similarly, some proponents of decentralization focus on the virtues of competition and redundancy within
the intelligence community (which in their view provide a positive incentive for the agencies to compete
in producing the most useful and high-quality information and analyses for policymakers), while critics of
decentralization question whether agencies which compete will have sufficient incentive to share crucial
information with each other: they emphasize that “knowledge is power” in and around the intelligence
community, so if an agency gives up its secret information, it thereby reduces its own power.6 Indeed,
dysfunctional inter-agency hostilities, which can greatly impede inter-agency cooperation and coordination, may arise in a competitive environment; see, e.g., Riebling (1994) for a description of the long history of conflicts between the FBI and CIA over intelligence, counter-intelligence, and law-enforcement activities.
Footnote 5: The American “solution” to this tradeoff is to maintain substantial distance between the analysts and the policymakers; the British “solution” is just the reverse, to maintain close, ongoing ties between analysts and policymakers.
Footnote 6: There are other incentive issues which we will not further discuss here, though they are not entirely unrelated to the structure-and-incentive arguments just discussed. These issues involve questions of “politicization” of the intelligence community, especially the CIA. For example, Gertz (2002) blames the intelligence failures of 9/11 on various kinds of incapacities of the intelligence community which he asserts were caused by budget cuts, manpower reductions, and restrictions on covert activities imposed by the Clinton Administration and its Democratic supporters in Congress; in his view, these cuts, reductions, and restrictions were motivated by an ideological hostility to the intelligence community. These limitations, Gertz argues, left the country dangerously exposed to terrorist attack at home and abroad. See also Baer (2002) for a critique of the politics in and around the CIA, based on his own first-hand experiences as a CIA agent.
A second general school of thought is closely related to the structural (or structure-and-incentive) arguments but stresses the tradeoffs any particular structural choice necessarily involves or the contingent
nature of an effective structure. For example, in his often-cited article, “Analysis, War, and Decision:
Why Intelligence Failures Are Inevitable,” Betts (1978, 84-85) argues that “Organizational solutions to
intelligence failure are hampered by three basic problems,” the first of which is that “most procedural reforms that address specific pathologies introduce or accent other pathologies.” And while Codevilla
(1980, 12-13) criticizes the arguments in Betts (1978), his own argument is not entirely unrelated:
Wise legislators always have tried to structure their governments in ways that would work against
the peculiar defects to which they appeared prone at the time and to act in ways that would fill the
present circumstances’ peculiar requirements. Such legislators usually have been aware that today’s dysfunctional habits may disappear, or indeed become tomorrow’s indispensable ones, and
that different circumstances may require entirely different kinds of performance.
This might be referred to as a “contingency theory” of organization design for the intelligence community, and it has deep roots (though Codevilla does not cite them) in the literature of organization theory; see
Donaldson (2001) for a recent review.
Finally, a third school of thought downplays the impact of structure in various ways and highlights
the importance of other factors, such as the importance of the motivations and quality of the analysts
(numbers, training, support) and the receptivity of policymakers to information and advice from the intelligence community. For example, Betts states that
Intelligence failure is political and psychological more often than organizational….Intelligence
can be improved marginally, but not radically, by altering the analytic system….The use of intelligence depends less on the bureaucracy than on the intellects and inclinations of the authorities
above it. (p.61)
Similarly, in his subsequent book, Surprise Attack (1982), Betts states that “The principal cause of surprise is not the failure of intelligence but the unwillingness of political leaders to believe intelligence or to
react to it with sufficient dispatch” (p.4). Betts then goes on to observe that (1982, 17)
the fixation on intelligence channels efforts into a search for organizational and technical solutions and diverts attention from other aspects of the problem. Four decades ago faulty organization and surveillance capabilities indeed were major vulnerabilities, and the trauma of Pearl Harbor energized important reforms in U.S. intelligence. But the number of problems of process that
can be fixed by structural change is limited because subjective factors always intrude….The
United States has reached the point of diminishing returns from organizational solutions to intelligence problems.
Codevilla (1980, 35) argued that “almost any system for analyzing intelligence can work well if everyone
connected with it believes his own life depends on its success.” Pforzheimer (1980, 40), in discussing the
pre-WWII state of intelligence organization in Britain, states that “you must get down to the fundamental
problem that intelligence is people and personalities more than it is organization.” He went on to suggest
that, “in the end it has to be people, and you have to rely on people, whether they are in this box or that
box, to produce what is ultimately needed in the future” (1980, 42). Handel (1980, 85) likewise appears
to downplay the role of organization when he writes that,
My conclusion is that most intelligence failures in nonmilitary affairs are the result of causes similar to those in military intelligence. The major causes of all types of surprise are rigid concepts
and closed perceptions. These compound the effects of noise, which make intelligence work
more difficult.
This paper will not attempt to resolve all of these issues: that would be impossible with the present
state of anyone’s knowledge. However, we will focus on what we think is one critical aspect of all three
of the major points of view reviewed above: there is reason to think that how intelligence agencies store
and catalog the information they gather can have a profound impact on how – or even whether – individual analysts later access the information and assess its meaning. How intelligence agencies store and catalog this information will be influenced by at least three factors. First, how information is stored and catalogued will be in part a function of how the intelligence community in general is formally structured.
Second, how information is stored and catalogued will be in part a function of how individual intelligence
agencies are structured. Third, how information is stored and catalogued will be in part a function of each
agency’s formal cataloguing system. For each of these three factors, one design can be expected to lead
to the storage of information in ways which are different from how another design would store the information, and how the information is stored can be expected to affect how – and even whether – the information is later successfully retrieved and assessed.
It would be very difficult to empirically demonstrate some of the arguments we wish to advance:
gaining the needed security clearances would be only one of the many insurmountable problems. Hence,
we turn to a very different, but perhaps still instructive, alternative: library catalogs. In previous work
(Hammond 1993), we observed that a library catalog is a hierarchically-structured means for classifying
and storing data about the library’s holdings, and we conjectured that if two libraries each used a different
kind of cataloging system to classify and store the same set of books, the inferences drawn from the use of
the libraries would be different. In this paper we empirically test this conjecture with data from two university libraries with two different cataloguing systems: the Library of Congress cataloging system at
Michigan State University and the Dewey Decimal cataloging system at Northwestern University. Our
results provide dramatic empirical support for the conjecture: the two libraries classify and store the same
set of books in surprisingly different ways, and what books appear “near” other books can be expected to
have a profound effect on what users who are trying to make sense of some field of intellectual inquiry
are actually able to learn. Hence, we conclude that how national-security data are stored and catalogued
can have a major impact on what inferences are drawn – and even on whether the proper inferences are drawn – about the likelihood of surprise attack by terrorists.
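To convey the intuition behind this comparison in miniature, the following Python sketch groups the same handful of titles under a Library of Congress-style rule and a Dewey-style rule. The titles are drawn from works cited in this paper, but the call numbers, the grouping rules (top-level LC class letters versus the first three Dewey digits), and the function names are our own illustrative assumptions; they are not the data or the procedures used in the empirical analysis reported later in the paper.

# A minimal sketch of the kind of comparison described above. The call numbers
# below are invented for illustration only; the study itself uses the actual
# Michigan State and Northwestern catalog records.

from collections import defaultdict

# (title, hypothetical Library of Congress call number, hypothetical Dewey number)
books = [
    ("Strategic Intelligence for American World Policy", "JK468.I6 K4", "327.73"),
    ("The Future of American Secret Intelligence",       "JK468.I6 P4", "940.5485"),
    ("Surprise Attack",                                  "U163 .B47",   "327.12"),
    ("Pearl Harbor: Warning and Decision",               "D767.92 .W6", "940.5426"),
]

def lc_class(call):
    """Top-level LC class: the leading alphabetic prefix (e.g., 'JK', 'U', 'D')."""
    prefix = ""
    for ch in call:
        if ch.isalpha():
            prefix += ch
        else:
            break
    return prefix

def dewey_class(call):
    """Top-level Dewey class: the first three digits (e.g., '327', '940')."""
    return call[:3]

def shelve(books, use_lc):
    """Group titles by the 'neighborhood' a cataloging system assigns them to."""
    shelves = defaultdict(list)
    for title, lc, dewey in books:
        key = lc_class(lc) if use_lc else dewey_class(dewey)
        shelves[key].append(title)
    return dict(shelves)

# Under these invented call numbers, the book shelved next to Kent's treatise
# differs across the two systems, so a user browsing "nearby" works encounters
# different sets of books.
print(shelve(books, use_lc=True))
print(shelve(books, use_lc=False))

Whatever call numbers are actually assigned, the point of the sketch is the same: which books a user encounters “near” a given title depends on the cataloging system, not on the books themselves.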
We must acknowledge, though, that our analysis does not take into account the incentive effects of
any particular structure, nor do we consider the quality of the personnel within any one structure; those
important issues will have to be left for consideration elsewhere. The argument about tradeoffs among
various kinds of structures is one for which we have considerable sympathy (see, e.g., Hammond and Miller 1985, Hammond 1986, and Hammond and Thomas 1989), and we will give some consideration (albeit
limited) to this issue.
II. ARGUMENTS ABOUT INFORMATION STRUCTURES IN INTELLIGENCE COMMUNITIES AND AGENCIES
We begin with a simple observation from Chan (1979, 175): “In the real world of strategic analysis, warning signals are usually scattered across individuals and bureaucratic units.” It is this basic fact that constitutes the core of the argument about the “intelligence failures” with which we began the paper: even if an
intelligence organization, or the intelligence community more generally, contains within it all the information it might need, in principle, to become aware of an impending surprise attack, this information
must be organized in some fashion – e.g., all brought together within some unit, even on someone’s desk
– so that the proper inferences can be made; in the language often used about 9/11, the information must
be organized so that someone can “connect the dots” into a coherent, meaningful picture. It was this perspective on what happened at Pearl Harbor – or, more accurately, what did not happen – that drove a substantial part of the postwar reform movement resulting in the creation of the CIA and the DCI.7
But interestingly, while this issue is clearly a critical one, it is difficult to find any extensive analysis
in the literature on intelligence organizations which goes into much depth on the interactions between the
structures of intelligence organizations and the character of the resulting intelligence. In the introductory
literature on intelligence organizations – see, e.g., Berkowitz and Goodman (1989), Johnson (1996),
Treverton (2001), and Lowenthal (2003) – there are references to “stovepipes” of information, a term which stems from the fact that some of the intelligence community is organized partly on the basis of the method
of collection: e.g., the NSA specializes in the collection of signals intelligence, or SIGINT (monitoring
communications involving the country’s adversaries), the NIMA specializes in the collection of geospatial intelligence, and the CIA’s clandestine operations branch (its name has been changed periodically)
specializes in the collection of human intelligence, or HUMINT. And there are certainly discussions of
the problems of integration and dissemination of the resulting information, e.g., by the CIA and the Defense Intelligence Agency (DIA). But there is almost no abstract and theoretical treatment of this linkage
between the structures of intelligence organizations and the structure of the resulting intelligence.
A. INDEXING AND CATALOGUING
Symptomatic of this problem is the fact that most descriptions of the nature of the process by which information is gathered and used virtually ignore the problem of storage of the information. For example,
Treverton (2001, 106) refers to the “real” intelligence cycle in which (to simplify his analysis a bit), (1)
“Intelligence infers needs,” (2) “Tasking and collection” occur, (3) “Raw intelligence” is collected, (4)
“Processing and analysis” occur, (5) “Policy receives and reacts,” and the cycle starts all over again.
Footnote 7: We should mention, though, that there remains some debate about the proper interpretation of what happened at
Pearl Harbor. For example, Levite (1987, ch.2) reviews the extensive literature on Pearl Harbor and concludes that
because “the United States possessed no positive warning on Japan’s intention to attack the United States in general,
and Pearl Harbor in particular, [this] is tantamount to saying that the Pearl Harbor surprise (as distinguished from
unpreparedness) was essentially a failure of collection, not of analysis” (p.82). In other words, it was what the U.S.
did not possess, rather than its use of what it did possess, that was the key problem.
Berkowitz and Goodman (1989, Appendix A) also refer to “the intelligence cycle” which includes “Step
I: Determining the Information Intelligence Consumers Require,” “Step II: Collection,” “Step III: Analysis and Coordination of Assessments Results,” and “Step IV: Dissemination of the Product.” Lowenthal
(2003, 51) posits a cycle consisting of “Requirements,” “Collection,” “Processing and Exploitation,”
“Analysis and Production,” “Dissemination,” and “Consumption,” and he even cites a 1993 publication
by the CIA, titled A Consumer’s Handbook to Intelligence (September 1993), which depicts a cycle consisting of “Planning and Direction,” “Collection,” “Processing and Exploitation,” “Analysis and Production,” and “Dissemination.” And Betts (1982, 88) suggests that:
After the basic decision about the targets and methods for information collection – perhaps the
most significant stage of intelligence management – the process of warning and response may be
conceived in terms of five phases: (1) data acquisition, (2) correlation and the intelligence professionals’ decision to warn, (3) communication to decisionmakers, (4) the authorities’ discussion,
assessment, and decision to respond, (5) the military commanders’ implementation of authorization to respond.
As should be apparent, however, there is virtually no mention of precisely what happens to the intelligence information after it has been collected but before it is assessed and analyzed.
This lacuna has not gone entirely unrecognized in the national security community. For example, in
the congressional hearings on 9/11, Deputy Secretary of Defense Paul Wolfowitz remarked that (19 September 2002, 6):
We also need to address a relatively new problem, what I’ll call “information discovery.” Many
agencies collect intelligence and lots of agencies analyze intelligence, but no one is responsible
for the “bridge” between collection and analysis. Who in the intelligence community is responsible for tagging, cataloguing, indexing, storing, retrieving, and correlating data or for facilitating
collaboration involving many different agencies? Given the volume of information that we must
sift through to separate signal from noise, this function is now critical. We cannot neglect it.
Moreover, there are bits and pieces of reference to libraries, indexing, collating, and cataloguing functions
in the historical intelligence literature. In his historical study, Captains without Eyes: Intelligence Failures in World War II, Kirkpatrick (1969, 12) discusses the British, French, Japanese, and American intelligence services and then observes that
What these nations had to forewarn them on national danger was not just one omniscient organization, but several different departments and agencies all collecting bits and pieces of information
which presumably was collated and analyzed before being presented to the top policy makers of
the government.
However, Kirkpatrick says nothing about what was involved when information was “collated.”
Similarly, in Hinsley et al. (1979), the first volume of the multi-volume series, British Intelligence
in the Second World War, reference is made in Chapter 1, titled “The Organisation of Intelligence at the
Outbreak of War,” to the fact that
Unlike the Service departments, the Foreign Office possessed no branch or section of its own that
was especially entrusted with intelligence. Attempts had been made from time to time to develop
its library and its research department in this direction, but – sometimes amalgamated and at others separated – those bodies had never become more than organizations for the storage, indexing
and retrieval of an increasingly voluminous archive of correspondence and memoranda because
the Foreign Office’s overriding interest was in the conduct of diplomacy. (p.5)
And in the volume’s Chapter 6, “The Mediterranean and the Middle East to November 1940,” it was observed of the Joint Intelligence Sub-Committee (JIC) of the Chiefs of Staff (COS) that
By the spring of 1940 it was issuing not only strategic appreciations but also background reports
on a great variety of subjects – frontiers, climate, resources, communications, hygiene – for more
than twenty countries, and it was indexing and collating information from more sources than any
British intelligence body had tried to do. It is clear, indeed, that it regarded information from any
source as grist to its mill – and that it established the right to receive everything – high-grade
Sigint [signals intelligence], field Sigint, PR [photographic reconnaissance] and other reconnaissance, SIS [Secret Intelligence Service] reports, POW intelligence, as well as appreciations sent
out from Whitehall and by the many intelligence organizations in the Middle East, and reports
from British diplomatic, consular and colonial authorities. (pp.192-193)
The second volume in the series, Hinsley et al. (1981), also makes the following comments on the Government Code and Cypher School [GC and CS] in Chapter 15, “Developments in the Organisation of Intelligence”:
The intelligence branches accepted that GC and CS did not infringe their rights if, as well as deciding on interception and cryptanalytic priorities, it interpreted the Sigint material it produced
and disseminated the intelligence direct to overseas commands; and they recognised that, the better to perform these functions, no less than to carry out its cryptanalysis, GC and CS must be given intelligence from other sources and, under safeguards, even knowledge of future Allied operational intentions and the needs they were likely to generate. They recognized, indeed, that GC
and CS was on some matters better placed to interpret Sigint than they were themselves – that its
research methods and voluminous indexes, initially developed to help it to attack ciphers and to
elucidate and reconstruct the texts of decrypts, gave it unrivalled expertise on subjects like enemy
cover-names, proformas, technical terms and signals routines which not infrequently provided the
best or the sole clues to the operational significance of the enemy’s messages.
Nonetheless, even these massive and insightful historical volumes never go into much detail about the
storage, indexing, collating, and retrieval activities on which the enormously effective British intelligence
efforts were all based. Hence, we are able to learn relatively little about the interactions among the information stored in the intelligence community as a whole (which agency had what information?), about
the information stored within one agency (which office in the agency had what information?), and about
how the information was stored within any one office.
Perusal of the earliest academic literature on U.S. intelligence agencies also reveals some degree of
sensitivity to these activities involving storage, indexing, collating, and retrieval. For example, one of the
earliest postwar treatises on intelligence, George Pettee’s The Future of American Secret Intelligence
(1946), states that
the essential aspects of intelligence operations may be summarized, though only roughly, as follows:
(1) The collection of raw information.
(2) Classification, indexing, and reference.
(3) Analysis and interpretation.
(4) Combined interpretation.
He then conducts an analysis of the different ways in which raw information can be handled. One way is
simply to route the material to analysts
in accordance with a summary classification in the hope that the analyst can be trusted to make
sure that he takes due note of all pertinent information. If the volume of incoming raw material is
very great, however, any assumption of this sort becomes fiction. (p.75)
The second way of handling raw information is much preferable:
the material can be carefully and thoroughly read by a group of analysts whose task is to classify
it, index it, and excerpt important items as may be appropriate. There is a wide range of choice as
to how much should be done through excerpting and digesting and how much through indexing.
And there is a very wide range of possible efficiency or inefficiency according to the control of
this part of the operation. The use of digests, either assembling excerpts on particular topics from
all sources, or digesting all topics from certain sources, can contribute enormously to the fundamental operation of sorting out pertinent statements, and bringing them into juxtaposition in order
to provoke and assist correlation by the trained minds of specialists. This is comparable to the
concentrating of low-grade ores before sending them to the smelter. (p.75)
Sherman Kent, in Strategic Intelligence for American World Policy (1951), titles the three parts of
his book as “I. Intelligence Is Knowledge,” “II. Intelligence Is Organization,” and “III. Intelligence Is
Activity,” and in chapter 7, “Departmental Intelligence” in part II, he examines several essential activities
of “the organizations within certain federal departments and agencies which are devoted to the production
of intelligence (knowledge) of what goes on abroad” (p.104). Among these essential activities are “the
library group” about which he says:
As in any institution where research is going forward and where new knowledge is the end-product, they constitute the keepers of the physical accumulation of knowledge. They take in, as
a result of their own and other peoples’ efforts, the data of yesterday; they index and file it; they
safeguard it; they dispense it to the people who are putting the data together in new patterns and
deriving from it new approximations to truth. (pp.106-107)
In chapter 8, “Departmental Intelligence Organization: Ten Lessons from Experience,” he then discusses
the library function far more extensively (see pp.133-136), talking about what kinds of materials an intelligence library needs to collect, how it should disseminate them, and so forth. But the discussion also
includes the following:
It [the library] indexes all material no matter how acquired on standard 3 x 5 library cards according to place of origin and subject. It gives each document a file number and a place in the central
file. A meaningful indexing operation is the most valuable and costly part of the whole library
business. Unless it is performed, there is no library in the real sense of the word. There exists
nothing more than a formless accumulation of paper. (p.135)
Kent summarizes this section by observing that
A library which operates along these lines will not be arrogating to itself functions which properly do not belong to it…, will be doing a clean and simple service job, and will in time build up a
large volume of indexed materials. Such a collection is one of the most valuable assets of the organization. (pp.135-136)
It is not clear from the context whether Kent is referring in these passages just to yearbooks, gazettes, statistical annuals and directories, foreign newspapers and technical journals, and classified documents such as diplomatic cables and attaché reports, all of which he specifically mentions (see p.134), or also to
the most vital pieces of information anyone in the intelligence organization is collecting. Nonetheless,
Kent’s comments about the critical importance of indexing and cataloguing would presumably apply to
all categories of information which are being collected.
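To make the kind of card index Kent describes a bit more concrete, consider the following sketch. It is ours, not Kent's: the class name, the subject headings, and the documents are hypothetical illustrations. It models his description in which each incoming document receives a file number in a central file plus index cards by place of origin and by subject, so that what an analyst can later pull together depends entirely on the headings assigned at filing time.

# A minimal sketch, not a reconstruction of any actual agency system: it models
# the card index Kent describes, with invented documents and headings.

from collections import defaultdict
from itertools import count

class CardIndex:
    def __init__(self):
        self._file_numbers = count(1)
        self.by_origin = defaultdict(list)   # place of origin -> file numbers
        self.by_subject = defaultdict(list)  # subject heading -> file numbers
        self.central_file = {}               # file number -> document text

    def accession(self, text, origin, subjects):
        """File a document: assign a file number and make its index cards."""
        n = next(self._file_numbers)
        self.central_file[n] = text
        self.by_origin[origin].append(n)
        for s in subjects:
            self.by_subject[s].append(n)
        return n

    def pull_subject(self, subject):
        """Retrieve everything filed under one subject heading."""
        return [self.central_file[n] for n in self.by_subject.get(subject, [])]

index = CardIndex()
index.accession("Attaché report on harbor traffic", origin="Naval attaché", subjects=["shipping"])
index.accession("Consular cable on visa applicants", origin="Consulate", subjects=["immigration"])

# Because the two documents were filed under different subject headings,
# a pull on either heading never brings them together on one desk:
print(index.pull_subject("shipping"))
print(index.pull_subject("immigration"))

The mechanics here are trivial; the point is the dependence they illustrate: retrieval reproduces whatever structure the filing scheme imposed, which is precisely the dependence at issue in our library-catalog comparison.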
Subsequent studies only occasionally touched on some of these points. For example, in Strategic Intelligence for American National Security (1989), Berkowitz and Goodman remark that
The organization of the analysis components of the intelligence community reflect many of the
beliefs of individuals such as William Langer and Sherman Kent, who were largely responsible
for their development. The first of these beliefs was that the cataloguing and distribution of intelligence data should be controlled by a single, central office, and that all participants in drafting an
estimate should have equal access to these data. (p.111)8
And in a subsequent book, Best Truth: Intelligence in the Information Age (2000), Berkowitz and
Goodman address questions of information storage and retrieval (more on their views in part 6 below),
and make the following observations:
Many intelligence officials understand that information technology is developing rapidly and that
the intelligence community must take advantage of it. Indeed, the intelligence community has
been a leader in adopting much of this technology. For example, the CIA’s Directorate of Intelligence (or the “DI,” as it is called in intelligence circles) began to automate its data bases and retrieval systems beginning in the 1970s. One system, SAFE (Support for Analysts’ File Environment), was originally an electronic card catalogue of the intelligence community’s data bases,
which CIA librarians used to search for topical documents and materials. In the early days of
SAFE, the librarians retrieved the materials and shipped them in hard-copy form to analysts. The
CIA gradually upgraded SAFE and its supporting systems, so that by the mid-1980’s, DI analysts
could both search for data and download many of the materials they needed using terminals at
their desks. (pp.63-64)9
B. THE FORMAL ORGANIZATIONAL STRUCTURE OF AN INTELLIGENCE AGENCY
A second theme in the intelligence community involves how particular intelligence agencies should be
structured. As with the literature on the indexing, cataloguing, storage, and retrieval, this is an issue
which has received remarkably little sustained attention in the literature on intelligence organizations. Of
course, the experiences of nations at war led inexorably to almost continuous reorganization of the intelligence agencies, as new capabilities, increasing manpower, expanding geographic scope, and volume of
information to be processed, among other factors, necessitated organizational changes. The historical
study, British Intelligence in the Second World War, cited earlier, is a study of nearly continuous organizational changes. In Volume One, chapter 1, “The Organisation of Intelligence at the Outbreak of War,”
(see Hinsley et al. 1979) we find such remarks as the following:
We must now give fuller consideration to the pressures that were bringing the [military] Service
departments to collaborate with each other and with the Foreign Office in their intelligence activities and, on the other hand, to the obstacles which impeded them.
We have already indicated in general terms the nature of these obstacles and the source of these
pressures. At a time when powerful arguments continued to demand that the different functions
of intelligence should be kept together under departmental control, within each departmental division of executive responsibility, equally powerful forces were arising in favour of separating these functions and creating specialist inter-departmental bodies to perform them. (pp.15-16)
Footnote 8: Langer was a professor of modern European diplomatic history at Harvard and served in the OSS during WWII as Deputy Chief and then Chief of the Research and Analysis Branch. He returned to Harvard after the war, but took a leave of absence in 1950 to organize the Office of National Estimates in the CIA.
Footnote 9: In footnote 16 at the end of this passage, Berkowitz and Goodman refer the reader to an account of the evolution of communications in the Directorate of Intelligence in its 1996 strategic plan, Analysis: Directorate of Intelligence in the 21st Century (Washington, DC: Central Intelligence Agency).
Later in the volume, we find a similar reference to a “recurring struggle” –
the struggle between the principle of concentrating as far as possible in one place the production
of Sigint, and especially the processes connected with high-grade cryptanalysis, and the policy of
dispersing those processes in order that they might be carried out in close proximity to the intelligence staffs who were responsible for judging the significance of the product and to the operational authorities who depended on the judgment of those staffs – was again settled in favour of
the principle of concentration. As the Service departments in the United Kingdom had yielded
their claims to those of GC and CS [the Government Code and Cypher School], however reluctantly, with the approach of war with Germany, so now they conceded that the Service intelligence staffs in the Middle East must accept in the shape of CBME [Combined Bureau Middle
East] a miniature version of GC and CS which would also be an out-post of GC and CS under GC
and CS’s policy control;… (pp.220-221)
While some of the bureaucratic struggles involved who would have jurisdiction over what kinds of
intelligence-related activities, other struggles involved what “principles” of organization, in the sense of
Gulick (1937), would be used to structure various intelligence organizations. For example, in Volume
One of British Intelligence in the Second World War, Hinsley et al. (1979) observe that:
The Service intelligence directorates continued to be organized, as before the war, largely on geographical lines. The geographical division had perhaps been the appropriate one in peace-time,
when the Services had been required to bring together the various types of intelligence that could
throw light on the military capacities and plans of individual foreign countries. But under war
conditions it had been subjected to increasing strain.
During the first year of the war all three directorates had responded to the new conditions by
setting up specialized or functional sections alongside the main country sections. In the Autumn
of 1940 the Air Intelligence directorate – expanded by then to 240 officers as compared with 40
at the outbreak of war – embarked on a series of reorganizations which by the summer of 1941
were to replace its geographical sub-divisions by functional sections. (p.284)
In particular, the account continues,
What chiefly brought about the reconstruction of AI [Air Intelligence] was the immense importance and mobility of the GAF [German Air Force] – the fact that its every move might be
significant operationally and for the light it threw on Germany’s strategic intentions. When these
considerations made it imperative to centralize all available intelligence about the GAF, and when
much of the increasing wealth of information was of a kind which, like the results of the detailed
research being done at GC and CS, could not be transmitted to the commands, AI was becoming
the unique authority and the commands were depending on its collation and assessment of intelligence about enemy air forces to a far greater extent than had been expected. (p.285)
These organizational choices – centralization vs. decentralization, and region vs. function – became a
rather considerable theme (such as it was for a topic that was otherwise largely ignored) in the intelligence
literature. For example, Pettee (1946, 69) establishes the central concern about intelligence organizations:
The task of administration of any such organization may therefore be stated thus: One must recruit, discipline, and arrange in a functional order, a number of human minds to the effect that the
result of their combined work will approximate the result which would have been obtained by a
single rational mind, had the task been within the scope of a single rational mind. (p.69)
He then goes on to say that
This principle is clearly implicit in intelligence organization. But the paradox remains that it is
very little understood, that it is scarcely acceptable as a basis for common discussion, frequently
ignored when it is most important, and applied only in a superficial fashion. The tendency of intelligence organizations to undertake reorganizations at intervals of a year, or two or three years,
with the reorganization nearly always taking the form of a shift from “regional” to “functional”
alignment of divisions and sections, or back again (which amounts to the very simple operation of
rotating the organization chart 90° on the center) illustrates how rudimentary is the understanding
of the problem. (p.69)
Kent (1951), in chapter 6 titled “Central Intelligence,” goes considerably beyond Pettee in discussing the problems of organizational structure. For example, he reviews some of the early postwar conflicts
over centralization and decentralization, and then, in discussing the “essential form that central intelligence should take” (p.80), he observes that:
In the last days of the war this argument was at its peak and centered around the basic question as
to whether central intelligence should be a very large operating organization or whether it should
be a kind of holding and management organization. The extreme advocates of the operation-organization idea asserted that an agency which had an almost exclusive responsibility for the intelligence of grand strategy and national security would be the only kind to do a proper job.
Whereas they did not propose to put departmental intelligence completely out of business, they
did urge a central organization, which would conduct on its own, the functions described in current doctrine as collection, evaluation, and dissemination (or as I have defined them – surveillance, research, and dissemination). As such it could not help but envelop (or duplicate) a substantial part of the departmental intelligence functions. It would have a staff of appropriate size:
very large. It would not be a part of a policy-making or operating department or agency of the
government. It would be a vast and living encyclopedia of reference set apart from all such departments and agencies, and devoted to their service. (p.80)
However, Kent went on to say,
At the same time such centralization violates what to me is the single most important principle of
successful intelligence, i.e. closeness of intelligence producers to intelligence consumers or users.
Even within a single department it is hard enough to develop the kinds of confidence between
producers and consumers that alone make possible the completeness, timeliness, and applicability
of the product. There are great barriers to this confidence even when intelligence is in the same
uniform or building or line of work. But how much more difficult to establish that confidence
across the no man’s land that presently lies between departments. It would be too easy for such
an agency to become sealed off from real intimacy with the State, Army, Navy, and Air Force departments;…. (pp.81-82)
In Kent’s view, the National Security Act of 1947, which created the CIA and DCI, “tries to meet this
danger in a number of ways” (p.83):
One of them is to reject the idea of the large self-contained operating organization and to establish
an agency primarily for the coordination of departmental intelligence. (p.83)
In the remainder of the chapter (pp.83-103) Kent then describes the characteristics and authority (or lack
thereof) which the CIA and the DCI had over the rest of the intelligence community, which remained
relatively decentralized.10
However, centralization-versus-decentralization was not the only organizational issue: Kent also discussed some of the classical regional-versus-functional choices which the British intelligence efforts in
WWII had had to make. At the beginning of chapter 8, “Departmental Intelligence Organization: Ten
Lessons from Experience,” he posed “Problem No. 1: Should the basic pattern of intelligence organization be regional or functional?” (p.116). He then describes the nature of this problem:
The job of strategic intelligence deals with foreign countries and with the complex of the life of
foreign people. Any people, and especially those of greatest concern to our strategic intelligence,
have many patterns of behavior. They behave as military beings organized into armed establishments, as political beings engaged in putting their formal relations with each other into orderly
form; they behave as economic beings providing their creature wants, and as social, moral, and
intellectual beings giving play to their gregariousness, their consciences, and their minds. Strategic intelligence, which puts peoples under surveillance and investigation, deals with them in both
national and behavioral guises. It deals with them as Frenchmen, Swedes, Russians, and Belgians, and it deals with them also as military, political, or economic beings. Furthermore it deals
with combinations of them, acting, say, in their military or economic guises; Swedes and Russians as economic men in a trade agreement; Britons and Frenchmen as political men looking out
for their joint security. The practical question is, how do you plot your organization so as to deal
best with both the national and the functional phases of foreign existence? (pp.116-117)
10 In light of some of the difficulties between the CIA and the FBI in the years leading up to 9/11 (Riebling 2002), it
is instructive to read Kent’s comments on FBI/CIA relationships. One matter which Kent discusses is the right of
the CIA to inspect and gain access to information in other intelligence agencies and departments, and to use the information
which the departments produce from scratch. Hence the phrase: such intelligence … ‘shall be made available’ to the CIA. (p.87)
But then Kent goes on to say that
The special position which the FBI enjoys among other departmental intelligence organizations is noteworthy. If I read the lines correctly, CIA has no right of inspection in the FBI. When it wants information
which it feels may be possessed by the FBI, CIA must ask for it in writing. In the best of circumstances
this procedure constitutes a barrier between the two organizations, and in circumstances other than the best
it can become an impenetrable wall. (p.87)
As we will see, this was an ongoing issue for many decades (again, see Riebling 2002).
What then follows is a lengthy and interesting consideration of the merits of each kind of organizational form. For example, he poses a hypothetical problem of a prospective loan to Iran and whether it
will succeed in its purpose, and he suggests that some strategic intelligence organization is asked what
will be the likely outcome. He then asks,
What sort of organization will best handle the job: an organization which has an Iran section in
command of the project or an organization which has an economic section in command? The argument is virtually endless. The regionalists say that unless you understand the nature of the Iranian, his traditional behavior, the national myths he defers to, and the character of Iranian politics
and society, no amount of theoretical economic analysis will provide the answer. The functionalists, or economists in this case, say that economic considerations override all of these things; that
the Iranian economic problem is not substantially different from any other economic problem;
that their (the economists’) business is the analysis of this universal economic behavior, and that
if the regionalists will loan them some staff to act as translators and legmen they will get on with
the job. (p.118)
But what to do, Kent suggests, is not clear:
Out of this dilemma one thing is plain: you must have people who know a very great deal about
Iran in general (and, I would insist, can read the Iranian language) and people who know the field
of economics. Which of the two groups should have command of the project is by no means so
plain, nor is there a clear answer to the larger question as to whether the whole organization
should be laid down along regional or functional lines. (pp.118-119, emphasis added)
He then observes that both methods of organization are sometimes used as a compromise at the same level, but if so,
The outcome of such compromise is immediate and total administrative chaos. It is an invitation,
and one readily accepted, for major civil war. In those matters which have, say, an economic or
psychological aspect and which also pertain to a group of people (that is, in all matters except
those of unique concern to the functional theorist) regionalists and functionalists will line up in
defense of their special competence, will bicker and snipe, and will often end by producing two
separate analyses which may contradict each other. (p.119)
Kent’s own preference is clear: “The compromise which I find myself supporting is one which uses
the regional breakdown as far as possible” (p.119), and he then describes in some detail how his compromise would work (pp.119-120). He acknowledges that the functionalists will not like this answer:
The compromise which I have advocated will appear to the functionalists as virtually no compromise at all. They will regard it as a distinct victory for the regionalists. But I believe that an
essentially regional pattern should prevail for three reasons. (p.120)
First, he argues, “The business which an intelligence organization must perform is predominantly national
or regional business” (p.120). Second, “The bulk of the primary data coming in or already available in
the file or library is from a national source and deals with national or regional problems” (p.120). And
third, “The insights which are jointly reached into the significance of trends in a region will often be more
valuable than what might be called eclectic insights arrived at by merging the work of an economist who
was thinking ‘economics’ and a political specialist who was thinking ‘region’” (p.121). And he concludes that “For these reasons, intelligence organizations which have essayed the non-regional or functional arrangement have found it practically inoperable” (p.121). But even here, he acknowledges that the
choices which have to be made are even more complex, because he then poses and addresses “Problem
No. 2: How to handle matters which defy regionalization?” as well as “Problem No. 3: How to handle
those problems of a multi-national nature for which the organization provides no full-time functional supervisor or coordinator?” (pp.122, 123).
These structural issues have persisted to this day. In Best Truth: Intelligence in the Information Age
(2000), for example, Berkowitz and Goodman report that
As successive DCIs have entered office and rearranged the boxes on the organization chart, the
effects have resembled tidal cycles under the influence of the moon. For example, Stansfield
Turner, DCI under Jimmy Carter, reorganized the CIA’s Directorate of Intelligence from a geography-based organization (offices covering Europe, the Soviet Union, the Far East, etc.) to an issue-based organization (offices covering politics, economics, military developments). Turner also renamed the directorate the “National Foreign Assessment Center.” Four years later, Ronald
Reagan’s DCI, William Casey, changed the organization and its name back to more or less its
earlier form. (p.33)
They also go on to observe that
Lately the trend has been to create special “centers” to support high-priority post-Cold War missions. In many respects, this is a reversion to an issue-oriented organization. Presumably, when
the next crisis in China or the Middle East erupts, we will go back to geography as an organizational motif. (p.33)
Berkowitz and Goodman conclude that there is no one structure which is always the best:
Are any of these organizational schemes better than the others? In truth, an argument can be
made for almost any organizational structure. It depends on the immediate situation. An organization will usually be organized effectively for one type of problem, less effectively for others.
Similarly, centralizing an organization may improve efficiency by eliminating redundancy. But
such “excess capacity” is often what allows an organization to meet requirements that are not important now, but which may become important in the future. Or, to use the modern parlance, an
organization that has been “right sized” for one assignment may be utterly “wrong sized” for another.
In a world in which requirements for intelligence are constantly changing, it is unlikely that organizational fixes will lead to genuine reform. No single scheme can meet the rapidly changing
demands for information and improvements in the means for producing it. (p.33)
Later in the book, Berkowitz and Goodman provide an interesting illustration – using the little-known religious cult Aum Shinrikyo in Japan, which spread nerve gas in a Japanese subway in 1995,
killing 12 and injuring five thousand – as an example of their argument:
One reason why the Aum went undetected is that it did not fit into the structure of any intelligence organization. No agency had an "Office of Northeast Asian Techno-Terrorist Quasi-Religious Cults." The threat did not match any of the established boxes on the intelligence community's organization chart or production plan for collecting and analyzing information on terrorism, and so it fell through the cracks. This kind of situation is likely to occur more often as a result of the singular and largely predictable Soviet threat having been replaced by numerous, rapidly generated threats, often originating from unusual sources. (pp.87-88)
In fact, they further use this case as an argument for decentralization:
The intelligence community needs a more effective mechanism for detecting, assessing, and monitoring these unlikely threats. As in the case of the economy, an “invisible hand” will often be a
better cultivator of ideas and allocation of effort.
C. THE ORGANIZATIONAL STRUCTURE OF THE INTELLIGENCE COMMUNITY
The literature and materials discussed in the previous two sub-sections represent most of what we have
been able to discover on the organization of intelligence information and the organization of intelligence
agencies. By far the largest body of empirical information of which we are aware is contained in the multi-volume series already cited, British Intelligence in the Second World War: Its Influence on Strategy and
Operations. However, perhaps precisely because it was written by professional historians, it contains remarkably little that is especially useful in terms of helping the reader understand more general themes and
issues, much less more abstract theories and hypotheses, about the impact of the formal structure of intelligence organizations and the intelligence community on how well the information gathered was assessed
and used. More recent studies, as we have seen, have drawn our attention, in a somewhat more abstract
manner, to organizational issues, but there is apparently little that is especially abstract or especially theoretical or especially rigorous in nature. Unfortunately, the literature on the structure of the country’s
overall intelligence community seems to be even less adequate.
Treverton (2001, 7-8) does provide one interesting observation on why the intelligence community
did come to be organized as it was during the Cold War:
With one target [the Soviet Union] and one preeminent consumer [the president], there was a certain logic to the way intelligence was – and is – organized. It was structured according to the different ways intelligence is collected: the National Security Agency (NSA) for intercepting signals, the CIA's DO [Directorate of Operations] for spying, and so on. These "INTs," or "stovepipes" in the language of insiders – SIGINT for signals intelligence and HUMINT for human intelligence, or spying – could each concentrate on the distinct contribution it could make to understanding the Soviet Union. In the process, though, the INTs became formidable baronies in their
own right. (pp.7-8)
For the most part, however, the debates over the overall structure of the intelligence community reflect
the debates, already reviewed above, which have long persisted over the merits of centralization versus
decentralization, closeness to or distance from the president, and so forth. As far as we are aware, there is
nothing approaching any kind of definitive resolution – either theoretical or empirical – about any of these
matters.
III. ORGANIZATIONAL STRUCTURE AND THE ORGANIZATION OF INTELLIGENCE ON 9/11
In the debate over the apparent intelligence failure leading up to 9/11, most of these same organizational
issues have emerged yet again. In October, November, and December of 2002, the House Permanent Select Committee on Intelligence and the Senate Select Committee on Intelligence held what they called a
“Joint Inquiry,” including a number of public hearings, on the 9/11 disaster. The Joint Inquiry staff also
conducted an extensive investigation in 2002, working with the various agencies involved to determine
precisely what had happened, and why, in the years and months preceding 9/11, and at the time of the
October-November-December public hearings Staff Director Eleanor Hill presented a series of public
reports on the staff findings and, ultimately, on staff recommendations for reform. 11
Leaders of a large number of agencies testified at the Joint Inquiry hearings, and many of their arguments reflected the various sides in the debates over the structure of agencies, and the structure of the
overall intelligence community, which we have already reviewed. To cite just a couple of examples, Gen.
William Odom (US Army, Ret.) presented arguments (on 19 October 2002) which were critical of decentralization, noting that
Five organizations run counterintelligence operations with no overall manager – the FBI, CIA,
and the three military services. The parochialism, fragmentation, and incompetence are difficult
11 For reasons of time and space, relatively few of the extensive empirical findings will be reviewed here. We hope
to do this in a future draft.
to exaggerate in the US counterintelligence world. This has become publicly clear to anyone following the reporting on the FBI and CIA over the past several months. It is not new. It has long
been the case, right back to World War II and throughout the Cold War. The combination of
fragmentation – which leaves openings between organizations for hostile intelligence operations
to exploit – and lack of counterintelligence skills insures a dismal performance. And terrorists,
like spies, come through the openings.
In contrast, Rear Admiral Lowell E. Jacoby, then Acting Director of the Defense Intelligence Agency, argued (on 17 October 2002, p.3) that:
Our measures of success must lie in the area of effectiveness, not efficiency. While some issues
are prime candidates for cross-community economizing – i.e. distributed or federated analysis,
product deconfliction, strict division of labor – terrorism is not one of them. Some of the potential seams in our defenses may best be closed by overlapping efforts and responsibilities. Terrorism is an issue where competitive analysis is essential; planned duplication and redundancy by
design are virtues.
And Deputy Secretary of Defense Paul Wolfowitz also observed to the congressional panel (19 September 2002) that “I think on the whole we get huge advantages from more decentralization.”
Finally, some well-placed observers seem unwilling to reject any of these arguments. For example,
President Clinton's National Security Advisor, Samuel ("Sandy") Berger, observed in his statement to the
congressional panel (on 19 September 2002) that:
There are some people who say organizational blocks don’t matter. It’s the people and if you get
the right people in, you’ll get the job done. But I think, in part, that’s true. But a good organizational structure can’t make up for bad personnel. But a good organizational structure can make
good people more efficient at what they do.
For our purposes here, the most interesting set of observations came from Sen. Richard Shelby (R-Alabama), Vice Chairman of the Senate Select Committee on Intelligence, in an extensive set of "Additional Views" submitted along with the Joint Inquiry's "Findings and Conclusions" and "Recommendations" on December 10, 2002.12 Sen. Shelby's comments were wide-ranging and critical of many different elements of the intelligence community, and we will highlight a few of his arguments which are most
relevant to our concerns.13
One problem which Sen. Shelby discussed was central to the entire Joint Inquiry and his remarks are
12 These materials are all available at http://intelligence.senate.gov/hr107.htm.
13 Our focus on Sen. Shelby's views should not be seen as an endorsement or criticism unless otherwise specified. In future drafts, as noted, we will offer our own overview. We should note that a considerable number of Sen. Shelby's comments were echoed, in one way or another, by the Joint Inquiry staff reports.
based on the extensive documentation provided by the Joint Inquiry’s staff reports:
The most fundamental problem identified by the JIS [Joint Inquiry Staff] is our Intelligence
Community’s inability to “connect the dots” available to it before September 11, 2001 about terrorists’ interest in attacking symbolic American targets. Despite a climax of concern during the
summer of 2001 about imminent attacks by Al-Qa’ida upon U.S. targets, the Intelligence Community (IC) failed to understand the various bits and pieces of information it possessed – about
terrorists’ interest in using aircraft as weapons, about their efforts to train pilots at U.S. flight
schools, about the presence in the U.S. of Al-Qa'ida terrorists Khalid al-Mihdhar and Nawaf al-Hazmi, and about Zacarias Moussaoui's training at a U.S. flight school – as being in some fashion related to each other. (p.23, footnotes deleted)
Some of Sen. Shelby’s criticisms focused on the failure of the CIA to share with the FBI and Immigration
and Naturalization Service (INS) information that it had gathered in Malaysia some time earlier on
terrorists associated with Al-Qa'ida: two of the terrorists involved in 9/11 – Khalid al-Mihdhar and Nawaf al-Hazmi – had been identified as linked to Al-Qa'ida, but the CIA had neglected to
share this information with the FBI or the INS. Had it done so, the INS could then have "watchlisted" these
two individuals through what was called the TIPOFF program (a computerized database used for screening applicants for visas to enter the US), and either rejected their visa applications or allowed them
into the US and immediately taken them into custody (for asserting, incorrectly, on their visa applications
that they had no links to terrorist organizations).
More extensive criticisms were directed at the FBI. Another critical part of the 9/11 story involved
the so-called “Phoenix memo” from an FBI agent in Phoenix who became curious about Middle Easterners, some with known sympathies to Islamic radicals, who were seeking pilot training in the area. Sen.
Shelby then remarked:
The FBI special agent in Phoenix who sent the EC [Electronic Communication] to headquarters
on July 10, 2001, addressed his memorandum to the Usama bin Laden Unit (UBLU) and the Radical Fundamentalist Unit (RFU) within the Bureau’s counterterrorist organization. Headquarters
personnel, however, decided that no follow-up was needed, and no managers actually took part in
this decision or even saw the memorandum before the September 11 attacks. The CIA was made
aware of the Phoenix special agent’s concerns about flight schools, but it offered no feedback despite the information the CIA possessed about terrorists’ interest in using aircraft as weapons.
Nor did the few FBI officials who saw the Phoenix EC at headquarters ever connect these concerns with the body of information already in the FBI's possession about terrorists' interest in obtaining training at U.S. flight schools. The full contents of the "Phoenix Memo" have yet to be
made public, but it is astonishing that so little was made of it, especially since it drew readers’ attention to certain information already in the FBI’s possession suggesting a very specific reason to
be alarmed about one particular foreign student at an aviation university in the United States.
(pp.29-30, footnotes deleted)
One part of the explanation for these difficulties within the FBI involved the Foreign Intelligence
Surveillance Act (FISA) of 1978 which governed what information could be passed between the FBI and
the intelligence community. Over the years, what had become known as “the wall” had been created,
based on FBI understandings of what FISA specified about what information could and could not be legally shared with other intelligence and law enforcement agencies. These legal obstacles were a
major part of the difficulty Minneapolis FBI agents had in gaining legal access to the laptop computer of
Zacarias Moussaoui, an Islamic radical who was in Minneapolis seeking flight training and who had been
taken into custody by the FBI. These problems of gaining access to, and then sharing with other agencies,
critical information attracted criticism. Said Sen. Shelby, for example:
In addition to problems stemming from presumed legal obstacles to passing crucial information
from the Intelligence Community to law enforcement, the events of September 11 highlighted the
problems of passing information in the other direction: from law enforcement to the Intelligence
Community. Throughout the 1990s, for instance, the Justice Department, the FBI, and the offices
of various U.S. Attorneys around the country accumulated a great deal of information about Al-Qa'ida and other terrorist networks operating within the United States. This information was derived from law enforcement investigations into such events as the 1990 assassination of Rabbi
Meir Kahane, the 1993 World Trade Center bombing, the abortive plot to blow up various harbors and tunnels in New York City [the 'Day of Terror' plot], the 1996 Khobar Towers attack [in
Saudi Arabia], the 1998 U.S. embassy bombings [in Kenya and Tanzania], Al-Qa'ida's "Millennium Plot," and the attack on the USS Cole in October 2000 [in Yemen]. Most of this information,
however, remained locked away in law enforcement evidence rooms, unknown to and unstudied
by counterterrorism (CT) analysts within the Intelligence Community. (p.56)
Sen. Shelby then went on to make the following comments about this information:
That this information possessed potentially huge relevance to the Intelligence Community’s CT
work is beyond question. Indeed, until the late 1990s, at least, U.S. law enforcement offices
probably had more information on Al-Qa’ida – its key members operating in the West, its organizational structure, and its methods of operation – than the CIA’s CTC [Counter-Terrorism Center]. Two CT specialists from the Clinton Administration’s National Security Council later described court records from 1990s terrorism trials as being “a treasure trove” that contained “information so crucial that we were amazed that the relevant agencies did not inform us of it while
we were at the NSC.” A small office within the Office of Naval Intelligence, for instance, began
a whole new field of inquiry into terrorist maritime logistics networks in the summer of 2001 on
the basis of a single FBI interview (a “form 302”) and the public court transcripts from the 1998
embassy bombings trials in New York, long before anyone had even tried systematically to
“mine” law enforcement records for intelligence-related information. That most such law enforcement information remained off limits to intelligence analysts before September 11 is terribly, and perhaps tragically, unfortunate. (p.57, footnotes deleted)
Sen. Shelby then focused on one aspect of the FBI that provides a fascinating illustration of how organizational
structure and problems of storage, cataloguing, and indexing (as previously discussed here) inhibited information retrieval by the FBI and other agencies as well. In a section titled "Tyranny of the Casefile"
(pp.62-74), Sen. Shelby began by noting that the FBI is fundamentally a law enforcement organization:
“its agents are trained and acculturated, rewarded and promoted within an institutional culture the primary
purpose of which is the prosecution of criminals.” As a consequence, he continued, “Within the Bureau,
information is stored, retrieved, and simply understood principally through the conceptual prism of a
“case” – a discrete bundle of information the fundamental purpose of which is to prove elements of
crimes against specific potential defendants in a court of law” (p.62). This has systemic effects, suggested Sen. Shelby:
The FBI’s reification of “the case” pervades the entire organization, and is reflected at every level
and in every area: in the autonomous, decentralized authority and traditions of the Field Offices;
in the priorities and preferences given in individual career paths, in resource allocation, and within the Bureau's status hierarchy to criminal investigative work and post hoc investigations as opposed to long-term analysis; in the lack of understanding of and concern with modern information
management technologies and processes; and in deeply-entrenched individual mindsets that prize
the production of evidence-supported narratives of defendant wrongdoing over the drawing of
probabilistic inferences based upon incomplete and fragmentary information in order to support
decision-making. (p.62)
He then argued that
Particularly against shadowy transnational targets such as international terrorist organizations that
lack easily-identifiable geographic loci, organizational structures, behavioral patterns, or other information “signatures,” intelligence collection and analysis requires an approach to acquiring,
managing, and understanding information quite different from that which prevails in the law enforcement community. Intelligence analysts tend to reach conclusions based upon disparate
fragments of data derived from widely-distributed sources and assembled into a probabilistic
“mosaic” of information. They seek to distinguish useful “signals” from a bewildering universe
of background "noise" and make determinations upon the basis of vague pattern recognition, inferences (including negative inferences), context, and history. For them, information exists to be
cross-correlated – evaluated, and continually subjected to re-evaluation, in light of the total context of what is available to the organization as a whole. Intelligence analysts think in degrees of
possibility and probability, as opposed to categories of admissibility and degrees of contribution
to the ultimate criminal-investigative aim of proof “beyond a reasonable doubt.” (pp.62-63)
Having advanced these arguments about the centrality of the “case file” in the FBI, Sen. Shelby then
analyzed several of the 9/11 events from this perspective. He began this section (on p.63) by noting that
the Joint Inquiry Staff (JIS) had determined that the FBI knew that convicted terrorist Abdul Hakim Murad “had been involved in an extremist Islamic plot to blow up 12 U.S.-owned airliners over the Pacific
Ocean and crash an aircraft into CIA Headquarters." However, he continued,
Murad was not charged with a crime in connection with the CIA crash plot, apparently because it
was merely at the “discussion” stage when he was apprehended. Because the CIA crash plot did
not appear in the indictment, however, the FBI effectively forgot all about it.
As the JIS has recounted, the FBI’s case file for the Murad case essentially ignored the air crash
plot, and FBI agents interviewed as part of our inquiry confirmed that Murad’s only significance
to them was in connection specifically with the crimes for which he was charged: “the other aspects of the plot were not part of the criminal case and therefore not considered relevant.” Convinced that the only information that really matters was information directly related to the criminal investigation at hand, the FBI thus ignored this early warning sign that terrorists had begun
planning to crash aircraft into symbols of U.S. power. Thus, rather than being stored in a form
that would permit this information to be assessed and re-assessed in light of a much broader universe of information about terrorist plans and intentions over time, the Murad data-point was
simply forgotten. Like all the other tidbits of information that might have alerted a sophisticated
analyst to terrorists’ interest in using airplanes to attack building targets in the United States, the
episode disappeared into the depths of an old case file and slipped out of the FBI’s usable institutional memory. (pp.63-64, footnotes deleted)
In fact, Sen. Shelby went on to say,
As the JIS has recounted, the FBI for years has tracked terrorism information in ways that essentially prohibit broad, cross-cutting analytical assessment. If it identified a suspected terrorist in
connection with a Hamas investigation, for example, the FBI would label him as a Hamas terrorist and keep information on him in a separate “Hamas” file that would be easily accessible to and
routinely used only by “Hamas”-focused FBI investigators and analysts. The Usama bin Laden
unit would be unlikely to know about the FBI’s interest in that individual, and no one thought to
establish a system for cross-referencing terrorist connections between the carefully-segregated institutional files. (p.64, footnote deleted)
This approach, Sen. Shelby concluded, “is entirely unsuited to virtually any long-term strategic analytic
work, and is patently inappropriate to counterterrorism analysis against the loose, interconnected and
overlapping networks of Islamic extremists that make up the modern jihadist movement” (pp.64-65).
And these problems were all heightened, in Sen. Shelby’s views, by a further aspect of the FBI:
The FBI’s decentralized organizational structure contributed to these problems, in that it left information-holdings fragmented into largely independent fiefdoms controlled by the various field
offices. The New York Field Office for years played the principal counterterrorism role within
the FBI simply because it had the misfortune of hosting the 1993 World Trade Center attacks,
thereby acquiring a degree of experience with Islamic fundamentalist terror groups. Even so, this
work focused upon terrorism cases – not strategic analysis – and the FBI’s decentralized structure
left other field offices in the dark. As the JIS concluded, there was even great “variation in the
degree to which FBI-led Joint Terrorism Task Forces (JTTFs) prioritized and coordinated field
efforts targeting Bin Ladin and al-Qa’ida,” and “many other FBI offices around the country were
unaware of the magnitude of the threat.” (p.65, footnote deleted)
Exacerbating these problems was what Sen. Shelby called the FBI’s “Technological Dysfunction”
(p.71): as he put it, “In addition to these cultural and organizational problems – or perhaps in large part
because of them – the FBI has never taken information technology (IT) very seriously, and has found itself left with an entirely obsolete IT infrastructure that is wholly inadequate to the FBI’s current operational needs, much less to the task of supporting sophisticated all-source intelligence fusion and analysis”
(p.72). He then continued,
The handling of the Phoenix EC demonstrates some of these technological deficiencies, highlighting the “limitations in the electronic dissemination system” that kept FBI supervisors from
seeing the document even when it was addressed to them. According to the JIS, the problems
with the Phoenix EC “are consistent with the complaints we have repeatedly heard throughout
this inquiry about the FBI’s technology problems.” The Bureau’s electronic system for disseminating messages such as the Phoenix EC was itself “considered so unreliable that many FBI personnel, both at the field offices and at FBI headquarters, use e-mail instead.” Since most offices
at the FBI lack a classified e-mail capability, this represents a fundamental obstacle to information-sharing of even the most rudimentary sort. Moreover, as users have fled the dysfunctional
case-tracking system, the Bureau appears to have lost any ability to track leads entered into it.
The JIS, for instance, was told that “there are 68,000 outstanding and unassigned leads assigned
to the counterterrorism division dating back to 1995.” At the time of our Inquiry, the FBI had no
idea whether any of these leads had been assigned and dealt with outside the electronic system.
(p.72, the material in quotation marks is from the Joint Inquiry Staff statement of 24 September
2002)
Sen. Shelby then concluded this particular section with the following:
Being able to know what one knows is the fundamental prerequisite for any organization that
seeks to undertake even the most rudimentary intelligence analysis. The FBI, however, has repeatedly shown that it is unable to do this. It does not know what it knows, it has enormous difficulty analyzing information when it can find it, and it refuses to disseminate whatever analytical
products its analysts might, nonetheless, happen to produce. The Bureau’s repeated failures in
this regard – despite successive efforts to reorganize its national security components – have led
many observers to conclude that “mixing law enforcement with counterintelligence” simply cannot work. As one former director of the National Security Agency has suggested, “cops” cannot
do the work of “spies.” This insight, in turn, has led to widespread public debate over the need
for radical structural reform – including removing the CI [counterintelligence] and CT [counterterrorism] functions from the FBI entirely. (p.74, footnote deleted)
To the extent that Sen. Shelby's observations here are valid, they emphasize yet again the complex nature of the interactions among the overall structure of the intelligence community, the structure of individual agencies, and the manner in which information is indexed and catalogued within each of the agencies. So the problem of organizational design is to confront and manage what Chan (1979, 175) noted
about the nature of warning signs about surprise attack, as quoted earlier: “In the real world of strategic
analysis, warning signals are usually scattered across individuals and bureaucratic units.” But deciding
how to confront and manage these problems first requires learning how to think about these problems.
This is a problem which we will consider in the next section.
IV. ORGANIZING INFORMATION WHEN IT IS A VECTOR OF ASPECTS
Every bit of data that is to be processed by some intelligence organization is characterized by an essentially infinite number of what might be called “aspects” or “features.” For example, any bit of data was collected (a) in some manner (e.g., was it collected by a high-tech listening device or by a low-tech, fleshand-blood human being?), (b) at some place (e.g., was it collected in the U.S. or in Saudi Arabia?), (c) at
some time (e.g., was it collected yesterday, last week, last month, or last year?); aspects (d), (e), (f), and
so forth each contain something relevant to a particular country, a particular activity, a particular person,
and so forth. In a sense, then, each “bit” of intelligence data is actually a whole vector – a list – of data.
That is, each bit of data contains some “content” and a number of other things that are associated with the
bit of data (how collected, where collected, when collected, etc.). The question is: how does an intelligence agency’s structure interact with information, seen in these terms?
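To make this representation concrete, consider the following minimal sketch (written in Python; the field names are our own and purely illustrative) of how a single bit of data, together with its vector of aspects, might be encoded:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataBit:
    """A minimal, illustrative representation of one 'bit' of intelligence data."""
    content: str                 # the substantive content of the bit
    how_collected: str           # aspect (a): e.g., signals intercept or human source
    where_collected: str         # aspect (b): e.g., the U.S. or Saudi Arabia
    when_collected: date         # aspect (c): yesterday, last week, last month, ...
    other_aspects: dict = field(default_factory=dict)  # aspects (d), (e), (f), ...

# A hypothetical example: the aspects, not just the content, determine where
# in an organization the bit is likely to be routed, stored, and catalogued.
example = DataBit(
    content="AT",
    how_collected="human source",
    where_collected="Indonesia",
    when_collected=date(2001, 8, 1),
    other_aspects={"function": "political", "region": "Indonesia"},
)
```

Seen in this way, an organizational structure is in effect a rule specifying which aspects of each bit determine where the bit is routed and with which other bits it is pooled.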
To illustrate the nature of this interaction, consider the illustration in Figure 1. Assume there is an
organization, under the leadership of a Director, and that below the director there are two hierarchical levels. Assume that the middle level can be organized on either a functional basis or a regional basis. Assume that there are three different functions, labeled f1, f2, and f3, and assume that these are military, economic, and political matters respectively. Assume also that there are three different regions, labeled r1, r2,
and r3, and assume that these are Saudi Arabia, Iraq, and Indonesia respectively. Figure 1A shows what
would normally be called a functional organization: the middle level is divided into three functional divisions, and within each of the functions there are three regions. Figure 1B shows what would normally be
called a regional organization: the middle level is divided into three regional divisions, and within each of
the regions there are three functions. As we can see, each structure thus has nine bottom-level officials,
labeled O1 through O9.
[Figure 1 about here]
Next, assume that each of the nine bottom-level officials specializes in the collection of a particular
kind of information. Thus, in the functional structure in Figure 1A official O1 specializes in the collection
of data with the properties f1r1: this means that O1 collects data on function 1 in region 1, which would
mean here that O1 collects information about military matters in Saudi Arabia. Similarly, official O2 specializes in the collection of data with the properties f1r2 (that is, military matters in Iraq), official O3 specializes in the collection of data with the properties f1r3 (military matters in Indonesia), official O4 specializes in the collection of data with the properties f2r1 (economic matters in Saudi Arabia), and so forth,
with official O9 specializing in the collection of data with the properties f3r3 (political matters in Indonesia).
The 3x3 table in Figure 1 specifies what each of these nine bottom-level officials actually observes.
In each of the nine boxes in the table, the entry in the table corresponds to the “content” of the information that the indicated official observed. Thus, since O1 specializes in observing information with the
f1r1 properties (that is, military information in Saudi Arabia), he observes information with content VM.
Similarly, since O2 specializes in observing information with the f1r2 properties (that is, military information in Iraq), he observes information with content QX. The last official, O9, specializes in observing
information with the f3r3 properties (that is, political information in Indonesia), and so he observes information
with content TA.
Assume now that each division director requests his three subordinates to pool their data to see if any
important message can be inferred. Figure 1A thus shows what information the three officials in each
functional division would see. For example, the officials in the military functional division (f = 1) would
see information with content VM, QX, and TA, the officials in the economics functional division (f = 2)
would see information with content YD, LF, and CK, and the officials in the politics functional division (f
= 3) would see information with content HB, ZS, and AT. Note that however each division’s three bits of
data are arranged or rearranged, there is only gibberish: there is no meaningful message (in English)
which can be inferred. For example, in the military functional division, the information can be arranged
in the following six ways – (1) VM-QX-TA, (2) VM-TA-QX, (3) QX-TA-VM, (4) QX-VM-TA, (5) TA-QX-VM, and (6) TA-VM-QX – and none of these six rearrangements yields any meaningful message.
The same holds true for the economics and politics functional divisions as well: none of the six rearrangements of their respective three bits of data yields any meaningful message. So in this structure, no
meaningful signal is inferred from the information that each functional division has gathered and assessed.
In contrast, consider Figure 1B. When their information is pooled, the officials in the Saudi Arabia
regional division (r = 1) would see information with content VM, YD, and HB, the officials in the Iraq
regional division (r = 2) would see information with content QX, LF, and ZS, and the officials in the Indonesia regional division (r = 3) would see information with content TA, CK, and AT. Note that no matter how the Saudi Arabia and Iraq regional divisions rearrange their information (six possible rearrangements for the Saudi Arabia division and six for the Iraq division), there is only gibberish: there is no
meaningful message which can be inferred. But now consider the Indonesia regional division: these officials see TA, CK, and AT, and this information can be arranged (interpreted) in the following six ways:
(1) TA-CK-AT, (2) TA-AT-CK, (3) CK-AT-TA, (4) CK-TA-AT, (5) AT-CK-TA, and (6) AT-TA-CK. For
the first five ways of interpret the data, nothing meaningful can be inferred. But for the sixth way of interpreting the data, a meaningful message does emerge: ATTACK!
In sum, given the data structure in the table in Figure 1, the officials in each of the functional divisions in Figure 1A would draw no meaningful inferences from the data: only gibberish – meaningless
“noise” – would be observed. However, while the officials in two of the regional divisions in Figure 1B
would also fail to draw any meaningful inferences from the data (they too would observe meaningless
“noise”), the officials in the third regional division would now draw an alarming inference: an attack may
be on the way! Hence, we draw the conclusion that how an intelligence organization is structured can
have a profound influence on whether the organization draws the proper inference from some otherwise
“noisy” body of data. In particular, whether the “signal” is separated from the “noise” is partly a function
of the organization structure in which intelligence officials attempt to draw inferences from the raw data
which they collect.
V. AN EMPIRICAL ASSESSMENT USING LIBRARY CATALOGUES
An earlier paper, "Toward a General Theory of Hierarchy: Books, Bureaucrats, Basketball Tournaments, and the Administrative Structure of the Nation-State" (Hammond 1993), called attention to our
tendency, as frequent users of libraries, to seek out some target book and then, once that book is in hand,
to browse back and forth along the nearby shelves, just to see what else we may find that is unknown or
unexpected to us but which may nonetheless be interesting and potentially useful to us. Academics do
this all the time, and we have all learned some important things in our research because of this habit of
ours. Indeed, we all persist in this habit precisely because we learn good things from it often enough to
maintain it as a habit.
But notice that what we are doing as we browse is using a search heuristic (à la Herbert Simon): we
are finding some target book in which we think we are interested, and then engaging in a search in the
local “neighborhood” of our target book. That is, we are using a heuristic which tells us to conduct a
“local” proximity-guided search; we are most definitely not engaging in some kind of “global” search for
new and relevant materials.
That 1993 paper pointed out that library cataloguing systems – such as the Library of Congress catalog and the Dewey Decimal catalog – are hierarchies: that is, in each kind of catalog the book titles are
arranged in a vast and detailed hierarchy. Indeed, one can look at such a catalog as involving the hierarchical organization of knowledge. In particular, each catalog system involves the creation of a vast set of
nested categories, and once this set of categories has been created, books are then assigned by professional catalogers to these various categories or sub-categories or sub-sub-categories, and so forth. Since these
two cataloguing systems are not the same, this suggests that each catalog involves a different hierarchical
organization of knowledge.
Given this general perspective, the paper then conjectured that if we were to select some "target" book, and
then were to examine the books on either side of the target book, in both the Library of Congress and
Dewey Decimal catalogs, we would find somewhat different sets of books surrounding our target book.
For example, if we were to look at the 25 books to the left of our target book, and the 25 books to the
right of our target book in a Library of Congress catalog, and then compare these 50 books with the 50
books to the left and right of our target book in a Dewey Decimal catalog, we might find that we have
somewhat different sets of books. That is, our heuristic searches in the two different hierarchies might
lead us to two somewhat different sets of books. The paper then suggested that, to the extent that these
two sets of books differ from one catalog to the other, we browsers would thus actually learn different
things about our research subject from working within the two different kinds of catalogs.
We have tested this conjecture, and our empirical results are striking: the structure of the hierarchy
seems to have a powerful impact on the results of local, proximity-guided search.
For this test, we began by selecting some 40 “target” books from 7 different subfields within political
science; see the book titles listed in Table 1. We then selected two university libraries for comparison:
the Michigan State University library is a Library of Congress library, and the Northwestern University
library is largely a Dewey Decimal library.14
[Table 1 about here]
Electronic catalogs were used for the two libraries; in this way, we did not have to worry about
books being checked out and thus not being available to be “next” to each other on the actual shelves. In
effect, then, we would begin with a target book in one of the electronic catalogs, then list the 25 books “to
one side” of the target book in the electronic catalog, and the 25 books “to the other side” of the target
book. We would then go to the other catalog and then list the 25 books "to one side" and the 25 books
"to the other side" of the target book in that catalog. The extent of the common membership – the overlap –
between these two sets of 50 books was the variable of interest.15
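The overlap measure just described can be stated compactly. The sketch below (in Python) assumes that each catalog is available as an ordered list of book identifiers already restricted to titles held by both libraries; in practice, as footnote 15 explains, the lists were assembled and cross-checked by hand from the two electronic catalogs:

```python
def neighborhood(catalog, target, k=25):
    """The k books on either side of the target in one catalog's ordering.

    `catalog` is an ordered list of identifiers, assumed here to be restricted
    to books held by both libraries; truncation at the ends of the list is
    simply ignored in this sketch.
    """
    i = catalog.index(target)
    return set(catalog[max(0, i - k):i]) | set(catalog[i + 1:i + 1 + k])

def overlap(lc_catalog, dewey_catalog, target, k=25):
    """Number of books in the target's neighborhood under both catalogs."""
    return len(neighborhood(lc_catalog, target, k) &
               neighborhood(dewey_catalog, target, k))

# Hypothetical usage: lc_order and dewey_order would be the shelf-ordered
# lists of commonly held books drawn from the two electronic catalogs.
# overlap(lc_order, dewey_order, "Administrative Behavior")
```

The variable of interest is simply the size of the intersection of the two 50-book neighborhoods.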
The results for the classics in Public Administration and Organization Theory, shown in Table 1, are rather striking: there is very little overlap. In fact, for two of these books – Herbert Simon's Administrative Behavior and Anthony Downs's Inside Bureaucracy – we even looked at the 400 nearest books, involving the 200 books on either side. For Downs's Inside Bureaucracy, there were only 46 overlaps among the 400 nearest books; see Figure 2. And amazingly enough, for Simon's Administrative Behavior, there were no overlaps among the nearest 400 books.
14 Kyle Jen conducted a preliminary survey three years ago, as part of a paper he wrote for Hammond's seminar in organization theory, and Ko Maeda greatly expanded and strengthened the empirical analysis. The results presented here stem from Maeda's painstaking labor. See Appendix A for more on collection techniques.
15 It was critical, of course, to ensure that each library actually possessed all 50 of the books that were next to each target book in the other library. This took very careful back-and-forth cross-checking. If some book in the initial list of 50 was simply not in the other catalog, the "next" book farther out on the initial list had to be examined for common ownership, and so forth. Results reported here are thus based on books in common ownership. The work was primarily conducted in the spring and summer of 2001. It took roughly 4 or 5 hours to conduct an analysis of each target book. Since both libraries have undoubtedly expanded their holdings since then, any re-checks of these data may differ somewhat, though our guess would be that the differences would be at most only slight.
[Figure 2 about here]
Table 1 also provides some statistics for each of the subfields of political science that we examined.
We suspect that the reason that American Politics has a relatively high overlap rate (mean = 7.2), compared to the other subfields, is that the field of American Politics, in both the Library of Congress and
Dewey Decimal catalogs, is structured by the major institutions of American government: the presidency,
the House, the Senate, and the Supreme Court. With these kinds of common categories, the probability of
overlap may well be substantially higher. In contrast, the subfield of Public Administration and Organization Theory has the lowest overlap rate (mean of 1.4), which lends credence to the complaint that our
subfield lacks broadly-accepted conceptual clarity.
Table 2 provides some summary statistics on the results. Note that the modal overlap is zero, and
that the mean overlap is only 3.0. Even with the differences in overlap rates which seem characteristic of
different subfields, the general extent of overlap seems remarkably low. In other words, how knowledge
in general is organized in these two cataloguing systems seems to be almost completely different.
[Table 2 about here]
Now consider what this result might mean for intelligence organizations. Let us reinterpret the 50
books surrounding some target book as 51 bits of data (the target book plus the 50 others around it) which
contain an important message about an impending surprise attack. That is, if the intelligence officials in
some office were to analyze and assess these 51 bits of data, they would draw an inference that an attack
is on the way. The critical organizational question is this: will the officials in an intelligence agency
which is structured differently from the first one see these same 51 bits of data and thus be able to draw
the same inference about the likelihood of an impending attack? For most of the 40 cases we examined,
the answer seems to be that the officials in the second agency would not draw the same inference: these
51 critical pieces of data are scattered all around the organization, and no one (save the director of the
overall organization, who would presumably be much too busy) would be in a position to assess these 51
critical pieces of data and draw the proper inference from them.
In sum, the message of our simple example in Figure 1 seems to have some degree of empirical confirmation: one kind of structure may “concentrate” critical pieces of information in one office where important inferences can be drawn, whereas another kind of structure may “disperse” the same critical pieces of information throughout the organization in such a way that no inferences at all are drawn (other than
the inference that “Nothing seems to be afoot…”).
But what do these results about library catalogues mean for real-world intelligence agencies? It may
be, of course, that these two library catalogs do not necessarily tell us anything about what categories are
used to categorize information, or how particular pieces of information are catalogued and indexed in real-world bureaucratic organizations. So the next key question is this: what might we expect to find in real-world bureaucratic organizations? We have no direct way of answering this question. But the observation with
which we began this paper – that intelligence failures often seem to occur despite the
fact that defenders had, in their possession prior to a successful attack, a considerable amount of evidence
suggesting that an attack was imminent, but that this critical information was scattered around the country’s intelligence community and not adequately pulled together and properly assessed in any one organizational location – would seem to provide at least some support for our general argument. At the very
least, we are developing a more refined theoretical understanding of what seems to be a significant empirical observation.
VI. ORGANIZATIONAL DESIGN IN A WORLD OF UNCERTAIN AND UNPREDICTABLE THREATS
We have now presented both theoretical and empirical reasons for thinking that organizational structure
can play a critical role, both in "concentrating" critical data in the hands of an analyst who could then recognize patterns in the data, and in "dispersing" these critical data so that it is, at the least, far
more difficult for any one analyst, or small group of analysts, to recognize that there is any meaningful
pattern in the data.
Of course, the director at the top of the organization could, in principle, serve as “the” official who
could recognize any pattern in the data. But in real-world organizations, the director has an unavoidable
“overload” problem: in an intelligence community of any size and importance, it would be literally impossible for the director himself to read, digest, assess, and interpret all of the raw data that might have a
bearing on a national-security problem of interest. This work must invariably be delegated to subordinates. The director may be called upon to render a final judgment on what the lower-level analysts think
they have learned, and to decide whether the country’s chief executive needs to be alerted, but the basic
reading and interpreting of the data need to be done at lower levels.
So the key question for the design of the intelligence community, and of the organizations within it,
is this: what design will have the highest probability of concentrating the intelligence data in such a way that surprise attacks can be avoided? Our pessimistic conjecture is that there is, in principle, no one design that can be guaranteed to do this best. Instead, what appears to be critical is what we might call
the “structure” of the critical data, that is, how the critical data are distributed and dispersed among the
rest of the data (i.e., among the “noise”). The “best” structure can thus be identified only in relation to the
structure of the critical data. If the critical data are organized regionally, for example, as with the data
table in Figure 1 (note that the critical data are all contained in column r3), then the regional structure, as
in Figure 1B, will turn out to be the structure most appropriate for inferring meaning from the set of intelligence data. But if the critical data are organized functionally, then the functional structure as in Figure
1A might be better. As an example, simply take the data table in Figure 1 and reverse the labels of the
columns and rows; thus, the table’s f1 row of data – VM, QX, TA – would now be labeled the r1 row, the
f2 row would become the r2 row, and the f3 row would become the r3 row, and the r1 column of data –
VM, YD, HB – would be labeled the f1 column, the r2 column would become the f2 column, and the r3
column would become the f3 column. With this transformed data table, the functional structure in Figure
1A would now be the one which would identify the critical AT-TA-CK signal; the regional structure in
Figure 1B would now be the one which sees only meaningless gibberish.
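To restate this row/column reversal concretely, the following sketch (illustrative only; the divisions helper is ours, not part of the paper's analysis) encodes the Figure 1 data table and reports which grouping of offices, functional or regional, places all three fragments of the AT-TA-CK signal in a single division. Swapping the f and r labels in the table simply reverses the answer.

    # The Figure 1 data table: cells[(function, region)] = one intercepted fragment.
    cells = {
        ("f1", "r1"): "VM", ("f1", "r2"): "QX", ("f1", "r3"): "TA",
        ("f2", "r1"): "YD", ("f2", "r2"): "LF", ("f2", "r3"): "CK",
        ("f3", "r1"): "HB", ("f3", "r2"): "ZS", ("f3", "r3"): "AT",
    }

    def divisions(group_by):
        """Group the nine fragments into divisions, either by function or by region."""
        index = 0 if group_by == "function" else 1
        out = {}
        for key, fragment in cells.items():
            out.setdefault(key[index], []).append(fragment)
        return out

    signal = {"AT", "TA", "CK"}
    for scheme in ("function", "region"):
        for division, fragments in divisions(scheme).items():
            if signal <= set(fragments):
                print(f"Grouping by {scheme}: division {division} holds {fragments}")
    # Only the regional grouping prints anything: division r3 holds TA, CK, and AT,
    # which can be reassembled as ATTACK; each functional division holds only one
    # fragment of the signal. Reversing the f/r labels reverses which grouping wins.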
This perspective does suggest that there are some clear benefits to at least some degree of centralization: it is essential to get all the critical data on the same "desk" – or at least into the set of offices falling under a common mid-level superior – at the same time, and to the extent that a decentralized structure fails
to do this, “competition” and “competitive analysis” (George 1972, 1980) will be based on incomplete
data sets.16 The unstated presumption on which recommendations for competition are based is that each
of the competing agencies has access to all of the data. However, as a number of critics of competition
and redundancy have also remarked, there are likely to be some dysfunctional incentive effects which
involve not sharing "one's own" data, given the equation of "secret data" with "organizational power."

16 This raises an interesting design tradeoff: what are the relative virtues of (a) a structure which satisfactorily centralizes the critical data but which has some dysfunctional incentive effects, versus (b) a structure which does not centralize the data in an entirely satisfactory manner but which has better incentive effects? It is not yet clear, a priori, which is the best way to go; some rigorous theoretical investigations would probably be required to provide a satisfactory answer.
Nonetheless, it is not adequate just to say that “centralization is best” (or something to that effect).
The reason is that the recommendation to "centralize" itself covers a wide variety of possible structures
which are equally centralized. For example, notice that the agency in Figure 1 is “centralized,” in the
sense that there is a single director who oversees all of the intelligence-gathering activities which take
place, yet there are (at the very least) two different kinds of “centralized” structures which are possible –
the functional structure and the regional structure as shown.
A problem which further complicates identification of the “best” structure is that there are likely to
be several critical national-security problems facing the intelligence community. During World War II,
the British could focus almost single-mindedly on the collection and assessment of information about
Germany (though British involvement in the Far East, especially in India and Burma, did require some
intelligence assets). Similarly, during the Cold War the United States could focus almost single-mindedly
on the collection and assessment of information about the Soviet Union (though with some attention devoted to mainland China). In either case, one might have entertained the possibility that there existed
“one best structure.” As we write this, however, terrorist attacks by Al-Qa’ida remain a serious concern,
but the United States is also involved in Iraq, and relations with Russia, mainland China, North Korea, and Iran remain very important. This raises the disturbing possibility that a structure best suited to
the collection and assessment of data on one problem (e.g., Al Qa’ida) may be ill suited to the collection
and assessment of data on other problems (e.g., Iraq or North Korea).
Gregory Treverton, in his Reshaping National Intelligence for an Age of Information (2001), mentions this general kind of problem, and implicitly suggests that there are designs which can effectively
deal with it:
Now, however, no corporation would organize itself this way given its business, its production
processes, and its market. The old structure just has to be wrong. Now there are many targets
and many consumers, though there are some consistent alignments among targets, customers, and
collectors. In these circumstances, a firm would organize around lines of business, establishing a
distributed network or a loose confederation in which the different parts of intelligence would endeavor to build very close links to the customers each served. (p.8)
Analysis and recommendations by Berkowitz and Goodman, in Best Truth: Intelligence in the Information Age (2000), also suggest the same perspective in a discussion of the legacy which the Cold War
left for the intelligence community, which they saw as “trapped in its traditional model for producing intelligence”:
It is hierarchical, linear, and isolated. This model worked well in the Cold War, when the United
States had a single paramount threat – the Soviet Union – that changed incrementally. However,
to monitor a world in which threats change and can appear suddenly from unexpected quarters,
the intelligence community needs a more flexible system. The organization should be able to reconfigure itself as needs change. It should be able to draw on expertise and information, wherever it may reside, whenever the need arises. It should be able to maintain multiple lines of communication. (p.63)
Somewhat later, in further discussing problems with “the traditional bureaucratic model” (p.72), they note
that one of the signs
that something was wrong with the traditional model was that intelligence officials would not rely
on their standing organizations when a really important issue came along. Instead, they would establish ad hoc “task forces” and “intelligence centers.” During the past decade, for example, the
intelligence community has created new intelligence centers to cover critical issues such as proliferation, counterterrorism, narcotrafficking, and other special topics. The intelligence centers
bring together the specific analysts and collectors needed to address a specific issue. The centers
are also intended to connect intelligence consumers directly to intelligence producers. Indeed, establishing new organizations for high priority assignments has become a reflex action… (p.73)
In subsequent pages, Berkowitz and Goodman then discuss how private corporations create such ad hoc organizations as a routine matter, and they emphasize how the fluid nature of such groups is "contrary to the usual rules of structure and hierarchy" (p.77). In fact, they develop a vision of how a "decentralized, market-based, fluid model for intelligence" (p.92) would work and then subject it to a critique to
determine how well it could be expected to work and what problems it might exhibit.
It is important to note that Berkowitz and Goodman (2000) also address critical issues in “compartmentation” and secrecy within the intelligence community itself. “Compartmentation” involves the tight
control of secret information, so that only those who are deemed to have a “need to know” are given access to particular kinds of data and collection methods. In the vision they present of a decentralized intelligence community, this issue of compartmentation and secrecy is critical: whatever virtues decentralization might have, it cannot work if secret information is tightly compartmented.
It seems, then, that some fundamental dilemmas are involved in the design of the intelligence community and intelligence organizations within that community. The creation of one single overall dataset
of intelligence-related information might be seen as a profoundly centralizing act: individual agencies, or
subunits within agencies, are no longer allowed to withhold "their" information from this centrally mandated dataset. Yet the very existence of this single overall dataset would make possible a profoundly
decentralized community if everyone within the community were given access to the dataset. Indeed, a
substantial amount of the traditional hierarchy would be eliminated, and if hierarchical structures cannot be "neutral," as argued in Hammond and Thomas (1989) for policymaking and as illustrated by our example in Figure 1 for information aggregation and assessment, then a major source of institutionally based bias would be removed as well.
For example, one wonders what would have happened on 9/11 if the FBI agents in Phoenix and
Minneapolis had not had to rely on higher-level authorities (who proved unresponsive) but could have
dived into a computerized dataset containing all intelligence-related information, in order to determine if
their fears and suspicions were well grounded. As it was, they had no such access, and certainly within
the FBI, no such dataset existed. But even if an overall dataset does exist but access to the information
within it remains compartmented (i.e., only some analysts can gain access to particular kinds of information), then the intelligence community or intelligence organization remains hierarchical, in effect, and
so the biases due to the intrinsic nature of the hierarchy would also remain. Thus secrecy and compartmentation, which have long been argued to be essential to the preservation of secrets, constitute a fundamental barrier to a neutral structure for the intelligence community.17
17 Even if everyone were given access to the central dataset, it would be an interesting question as to whether
structure of the dataset itself would bias what is learned from it. For example, is the dataset built around keywords,
so that an analyst would be able to conduct an all-dataset search only by investigating the keywords themselves?
The critical issue here would be whether some “new” national-security problem which arises has already been described by some keyword, which would then give analysts a clue as to where to look in the dataset for essential information. A dataset built around keywords would thus retain a kind of built-in bias, due to the inevitable inadequacy of old keywords for new problems which may arise. It thus would seem preferable that all information be in the
dataset in undigested form, so that the analyst with a new problem could search the entire dataset for documents, etc.
that contain words or signals of particular interest.
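The footnote's contrast between a keyword-built dataset and one searchable in undigested form can be illustrated with a small sketch. The documents, the keyword list, and the search term below are invented for the illustration and are not drawn from any actual holdings; the point is simply that a keyword index built around yesterday's categories cannot surface a document about a problem that has no keyword yet, whereas a full-text scan over the raw documents can.

    # Illustrative only: two invented documents and an out-of-date keyword list.
    documents = {
        "cable-17": "flight school inquiries by several individuals of investigative interest",
        "cable-18": "routine report on regional economic conditions",
    }
    old_keywords = ["proliferation", "narcotics", "counterintelligence"]

    # A keyword-built dataset: each document is reachable only through the
    # keywords assigned when it was filed.
    keyword_index = {kw: [d for d, text in documents.items() if kw in text]
                     for kw in old_keywords}
    print(keyword_index.get("flight school", []))   # new problem, no keyword -> []

    # The same holdings in undigested form: the analyst can search for any term.
    print([d for d, text in documents.items() if "flight school" in text])   # -> ['cable-17']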
FIGURE 1: TWO ALTERNATIVE STRUCTURES FOR AN INTELLIGENCE AGENCY

Content of Dataset (rows are functions, columns are regions; each cell is one piece of intercepted data):

                               Saudi Arabia (r1)   Iraq (r2)   Indonesia (r3)
    Military function (f1)            VM               QX            TA
    Economic function (f2)            YD               LF            CK
    Political function (f3)           HB               ZS            AT

A. Functional Structure: the Director oversees three functional divisions (f1, f2, f3), each containing three regional offices (r1, r2, r3). Division f1 holds VM, QX, and TA; division f2 holds YD, LF, and CK; division f3 holds HB, ZS, and AT.

B. Regional Structure: the Director oversees three regional divisions (r1, r2, r3), each containing three functional offices (f1, f2, f3). Division r1 holds VM, YD, and HB; division r2 holds QX, LF, and ZS; division r3 holds TA, CK, and AT.
FIGURE 2: OVERLAP RESULTS FOR THE 400 NEAREST BOOKS AROUND DOWNS, INSIDE BUREAUCRACY
TABLE 1: OVERLAPS BETWEEN LIBRARY OF CONGRESS AND DEWEY DECIMAL CATALOGUES

American Politics
    Neustadt, Presidential Power (1960) .................................... 15 overlaps out of 50
    Dahl, Who Governs? (1961) ............................................... 9 overlaps out of 50
    Key, Southern Politics (1949) ........................................... 8 overlaps out of 50
    Campbell, Converse, Miller, and Stokes, The American Voter (1960) ...... 4 overlaps out of 50
    Mayhew, Congress: The Electoral Connection (1974) ....................... 0 overlaps out of 50
    Mean = 7.2

Political Economy
    Wilson, The Politics of Regulation (1980) .............................. 10 overlaps out of 50
    Schumpeter, Capitalism, Socialism, and Democracy (1950) ................. 4 overlaps out of 50
    Hayek, The Road to Serfdom (1944) ....................................... 3 overlaps out of 50
    Lindblom, Politics and Markets (1977) ................................... 2 overlaps out of 50
    Friedman, Capitalism and Freedom (1962) ................................. 0 overlaps out of 50
    Mean = 3.8

Comparative Politics
    Huntington, Political Order in Changing Societies (1968) ................ 9 overlaps out of 50
    Rae, The Political Consequences of Electoral Laws (1967) ................ 6 overlaps out of 50
    Duverger, Political Parties (1954) ...................................... 3 overlaps out of 50
    Almond and Verba, The Civic Culture (1963) .............................. 0 overlaps out of 50
    Dahl, Polyarchy (1971) .................................................. 0 overlaps out of 50
    Mean = 3.6

Political Philosophy and Political Thought
    Rawls, A Theory of Justice (1971) ....................................... 8 overlaps out of 50
    Arendt, The Origins of Totalitarianism (1951) ........................... 3 overlaps out of 50
    Strauss, Natural Right and History (1953) ............................... 1 overlap out of 50
    Hartz, The Liberal Tradition in America (1955) .......................... 1 overlap out of 50
    Wolin, Politics and Vision (1960) ....................................... 0 overlaps out of 50
    Mean = 2.6

Formal Theory
    Black, The Theory of Committees and Elections (1958) .................... 4 overlaps out of 50
    Olson, The Logic of Collective Action (1965) ............................ 3 overlaps out of 50
    Arrow, Social Choice and Individual Values (1951) ....................... 2 overlaps out of 50
    Riker, The Theory of Political Coalitions (1962) ........................ 2 overlaps out of 50
    Axelrod, The Evolution of Cooperation (1984) ............................ 1 overlap out of 50
    Mean = 2.4

International Politics
    Waltz, Theory of International Politics (1979) .......................... 3 overlaps out of 50
    Bueno de Mesquita and Lalman, War and Reason (1992) ..................... 2 overlaps out of 50
    Morgenthau, Politics Among Nations (1949) ............................... 2 overlaps out of 50
    Waltz, Man, the State, and War (1959) ................................... 1 overlap out of 50
    Schelling, The Strategy of Conflict (1960) .............................. 0 overlaps out of 50
    Mean = 1.6

Public Administration and Organization Theory
    Wildavsky, The Politics of the Budgetary Process (1964) ................. 4 overlaps out of 50
    Gulick and Urwick, Papers on the Science of Administration (1937) ....... 4 overlaps out of 50
    Cyert and March, A Behavioral Theory of the Firm (1963) ................. 2 overlaps out of 50
    Downs, Inside Bureaucracy (1967) ........................................ 2 overlaps out of 50
    Lindblom, The Intelligence of Democracy (1965) .......................... 1 overlap out of 50
    Allison, Essence of Decision (1971) ..................................... 1 overlap out of 50
    Simon, Administrative Behavior (1947) ................................... 0 overlaps out of 50
    Kaufman, The Forest Ranger (1960) ....................................... 0 overlaps out of 50
    March and Simon, Organizations (1958) ................................... 0 overlaps out of 50
    Selznick, Leadership in Administration (1957) ........................... 0 overlaps out of 50
    Mean = 1.4
TABLE 2: SUMMARY STATISTICS

    Number of Overlaps    Frequency    Percent    Cumulative Percent
            0                10          25.0            25.0
            1                 6          15.0            40.0
            2                 7          17.5            57.5
            3                 5          12.5            70.0
            4                 5          12.5            82.5
            5                 0           0.0            82.5
            6                 1           2.5            85.0
            7                 0           0.0            85.0
            8                 2           5.0            90.0
            9                 2           5.0            95.0
           10                 1           2.5            97.5
           11                 0           0.0            97.5
           12                 0           0.0            97.5
           13                 0           0.0            97.5
           14                 0           0.0            97.5
           15                 1           2.5           100.0
    Total                    40         100.0

    Mode: 0 overlaps out of 50
    Mean: 3.0 overlaps out of 50
APPENDIX
The data collection was conducted in the following way:
1. From the website of each library, we obtained the list of books shelved adjacent to the target book. The list is ordered by call number.
2. For each book in the list, we checked whether the other library holds it and, if it does, recorded its call number in the other system.
3. We then removed the books not held by both libraries, leaving a list of the 50 neighboring books of the target book in the Michigan State University Library and a similar list for the Northwestern University Library.
4. We then compared the two lists and counted how many books appear in both (a sketch of this counting step follows the list).
5. This process was conducted intermittently from January 2001 to April 2002.
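Steps 2 through 4 amount to intersecting two neighbor lists. The following minimal sketch illustrates that matching-and-counting step only; the overlap_count helper and the sample titles are invented for the illustration and are not a record of our actual working procedure, which relied on the two online catalogues.

    def overlap_count(msu_neighbors, nu_neighbors, n=50):
        """Count how many of the n books nearest the target in the MSU (Library of
        Congress) shelf order also appear among the n books nearest the same target
        in the Northwestern (Dewey) shelf order; both lists contain only books held
        by both libraries, identified by normalized title."""
        return len(set(msu_neighbors[:n]) & set(nu_neighbors[:n]))

    # Illustrative only: two tiny neighbor lists around a hypothetical target book.
    msu = ["inside bureaucracy", "the forest ranger", "organizations", "leadership in administration"]
    nu = ["organizations", "administrative behavior", "inside bureaucracy", "essence of decision"]
    print(overlap_count(msu, nu))   # -> 2 overlaps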
The identity of books was determined according to the following rules (a sketch of the normalization these rules imply follows the list):
1. Sometimes the libraries count multiple copies of the same book as different entries. We treated them
as a single entry.
2. We counted different editions as a single entry. Subtitles sometimes differ across editions, but we still counted such books as one entry as long as the main title is the same. Different editions published by different publishers were also treated as one entry.
3. Translations of a book into other languages were treated as different entries.
4. Libraries shelve some types of books in separate locations (e.g., a large-book collection, an Africana section, a Business Library), so books with sequential call numbers are sometimes located in physically distant places. We ignored those local shelving decisions and relied only on call numbers.
5. The exception to Rule 4 is books held in those branch libraries of Northwestern that use Library of Congress call numbers (e.g., the Law Library). Those books were treated as if the Northwestern Library did not hold them.
6. Microforms and government documents have a different system of call numbers. We found some books that are in the normal collection in one library but in the microform or government documents section in the other; those books were ignored.
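These identity rules amount to a normalization step applied before counting. The sketch below is only meant to mirror Rules 1 through 3 (collapse copies and editions under the main title; keep translations distinct); the normalize helper and the sample entries are invented for illustration and are not part of our actual procedure.

    def normalize(entry):
        """Reduce a catalogue entry to the identity key implied by Rules 1-3:
        the main title (subtitle, edition, copy, and publisher ignored) plus
        language, so that translations remain distinct."""
        main_title = entry["title"].split(":")[0].strip().lower()
        return (main_title, entry["language"])

    # Illustrative entries only (the subtitle is invented for the example).
    entries = [
        {"title": "Inside Bureaucracy", "language": "English"},
        {"title": "Inside Bureaucracy: Second Edition", "language": "English"},
        {"title": "Inside Bureaucracy", "language": "Japanese"},
    ]
    print(len({normalize(e) for e in entries}))   # -> 2: one English entry plus one translation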
REFERENCES
The Associated Press. 2002. “Congress Opens Investigation of Sept. 11 Attacks to Public.” The New
York Times, September 18, 2002. Accessed via http://www.nytimes.com/aponline/national/APAttacks-Intelligence.
The Associated Press. 2003. “Senators Skeptical on Terrorism Center.” Accessed via
http://www.nytimes.com/aponline/national/AP-Terrorism-Center.
Baer, Robert. 2002. See No Evil: The True Story of a Ground Soldier in the CIA’s War on Terrorism.
New York: Three Rivers Press.
Bamford, James. 2002. “How to (De-)Centralize Intelligence.” The New York Times, November 24,
2002. Accessed via http://www.nytimes.com/2002/11/24/weekinreview/24BAMF.html.
Bendor, Jonathan, and Thomas H. Hammond. 1992. “Rethinking Allison's Models.” American Political
Science Review 86:2 (June): 301-322.
Berkowitz, Bruce D., and Allan E. Goodman. 1989. Strategic Intelligence for American National Security. Princeton, NJ: Princeton University Press.
Berkowitz, Bruce D., and Allan E. Goodman. 2000. Best Truth: Intelligence in the Information Age.
New Haven: Yale University Press.
Betts, Richard K. 1978. “Analysis, War, and Decision: Why Intelligence Failures Are Inevitable.” World
Politics 31: 61-89.
Betts, Richard K. 1980-1981. “Surprise Despite Warning: Why Sudden Attacks Succeed.” Political Science Quarterly 95: 551-572.
Betts, Richard K. 1982. Surprise Attack: Lessons for Defense Planning. Washington, DC: The Brookings Institution.
Brady, Christopher. 1993. "Intelligence Failures: Plus Ça Change…" Intelligence and National Security
8: 86-96.
Carter, Ashton B. 2001. “The Architecture of Government in the Face of Terrorism.” International Security 26: 5-23.
Carter, Ashton B., and William J. Perry. 1999. “A False Alarm (This Time): Preventive Defense against
Catastrophic Terrorism.” In Ashton B. Carter and William J. Perry, Preventive Defense: A New Security Strategy for America. Washington, DC: The Brookings Institution.
Chan, Steve. 1979. “The Intelligence of Stupidity: Understanding Failures in Strategic Warning.” American Political Science Review 73: 171-180.
Clausewitz, Carl von. 1976. On War. Princeton, NJ: Princeton University Press. Originally published in
1832.
Comfort, Louise K. 2002. “Institutional Re-orientation and Change: Security as a Learning Strategy.”
The Forum (The Berkeley Electronic Press), vol. 1, issue 2, article 4. Accessed via
http://www.bepress.com/forum.
Demchak, Chris C. 2002. “Un-Muddling Homeland Security: Design Principles for National Security in
a Complex World." The Forum (The Berkeley Electronic Press), vol. 1, issue 2. Accessed via
http://www.bepress.com/forum.
Donaldson, Lex. 2001. The Contingency Theory of Organizations. Thousand Oaks, CA: Sage Publications.
Ford, Harold P. 1998. CIA and the Vietnam Policymakers: Three Episodes, 1962-1968. Langley, VA:
Center for the Study of Intelligence, Central Intelligence Agency.
George, Alexander L. 1972. “The Case for Multiple Advocacy in Making Foreign Policy.” American
Political Science Review 66: 751-785.
George, Alexander L. 1980. Presidential Decisionmaking in Foreign Policy: The Effective Use of Information and Advice. Boulder, CO: Westview Press.
Gertz, Bill. 2002. Breakdown: How America’s Intelligence Failures Led to September 11. Washington,
DC: Regnery Publishing.
Guderian, Heinz. 1952. Panzer Leader. New York: Dutton.
Guggenheim, Ken. 2002. “Probe: U.S. Knew of Jet Terror Plots.” The Associated Press, September 18,
2002. Accessed via
http://wire.ap.org/Apnews/center_package.html?FRONTID=NATIONAL&PACKAGEID=attacksintelligence.
Hammond, Paul Y. 1961. Organizing for Defense: The American Military Establishment in the Twentieth
Century. Princeton, NJ: Princeton University Press.
Hammond, Thomas H. 1986. “Agenda Control, Organizational Structure, and Bureaucratic Politics.”
American Journal of Political Science 30:2 (May): 379-420.
Hammond, Thomas H. 1990. “In Defense of Luther Gulick's ‘Notes on the Theory of Organization.’”
Public Administration 68:2 (Summer): 143-173.
Hammond, Thomas H. 1993. “Toward a General Theory of Hierarchy: Books, Bureaucrats, Basketball
Tournaments, and the Administrative Structure of the Nation-State.” Journal of Public Administration Research and Theory 3:1 (January): 120-145.
Hammond, Thomas H. 1994. “Structure, Strategy, and the Agenda of the Firm.” In Richard P. Rumelt,
Dan E. Schendel, and David J. Teece, eds., Fundamental Issues in Strategy: A Research Agenda.
Boston: Harvard Business School Press.
Hammond, Thomas H. 2003. “Herding Cats in University Hierarchies: The Impact of Formal Structure
on Decision-Making in American Research Universities.” In Ronald G. Ehrenberg, ed., Governing
Academia. Ithaca, NY: Cornell University Press.
Hammond, Thomas H., and Gary J. Miller. 1985. “A Social Choice Perspective on Authority and Expertise in Bureaucracy.” American Journal of Political Science 29:1 (February): 611-638.
Hammond, Thomas H., and Paul A. Thomas. 1989. “The Impossibility of a Neutral Hierarchy.” Journal
of Law, Economics, and Organization 5:1 (Spring): 155-184.
Hammond, Thomas H., and Paul A. Thomas. 1990. “Invisible Decisive Coalitions in Large Hierarchies.”
Public Choice 66:2 (August): 101-116.
Handel, Michael. 1980. “Avoiding Political and Technological Surprise in the 1980’s.” In Roy Godson,
ed., Intelligence Requirements for the 1980’s: Analysis and Estimates. Washington, DC: National
Strategy Information Center.
Hinsley, F. H., with E. E. Thomas, C. F. G. Ransom, and R. C. Knight. 1979. British Intelligence in the
Second World War: Its Influence on Strategy and Operations, Vol. One. London: Her Majesty’s Stationery Office.
Hinsley, F. H., with E. E. Thomas, C. F. G. Ransom, and R. C. Knight. 1981. British Intelligence in the
Second World War: Its Influence on Strategy and Operations, Vol. Two. London: Her Majesty’s Stationery Office.
The Insight Team of the Sunday Times. 2002. The Yom Kippur War. New York: Ibooks, Inc.
Johnson, Loch K. 1996. Secret Agencies: U.S. Intelligence in a Hostile World. New Haven: Yale University Press.
Johnston, David. 2003. “C.I.A. Director Will Lead Terror Center.” The New York Times, January 29, 2003.
Accessed via http://www.nytimes.com/2003/01/29/politics/29TERR.
Kam, Ephraim. 1988. Surprise Attack: The Victim’s Perspective. Cambridge, MA: Harvard University
Press.
Kamarck, Elaine C. 2002. “Applying 21st-Century Government to the Challenge of Homeland Security.”
Arlington, VA: The PricewaterhouseCoopers Endowment for the Business of Government. Accessed
via http://endowment.pwcglobal.com.
Kendall, Willmoore. 1949. “The Function of Intelligence.” World Politics 1: 542-552.
Kent, Sherman. 1951. Strategic Intelligence for American World Policy. Princeton, NJ: Princeton University Press.
Kirkpatrick, Lyman B., Jr. 1969. Captains Without Eyes: Intelligence Failures in World War II. New
York: Macmillan.
Knorr, Klaus, and Patrick Morgan. 1983. Strategic Military Surprise: Incentives and Opportunities. New
Brunswick, NJ: Transaction Books.
Knorr, Klaus. 1983. “Strategic Surprise in Four European Wars.” In Klaus Knorr and Patrick Morgan,
eds., Strategic Military Surprise: Incentives and Opportunities. New Brunswick, NJ: Transaction
Books, pp.9-42.
Levite, Ariel. 1987. Intelligence and Strategic Surprises. New York: Columbia University Press.
Lichtblau, Eric. 2003. “Security Officials Considering Plan to Combine Terror Forces.” The New York
Times, January 30, 2003, p.A13.
Loeb, Vernon. 2003. "When Hoarding Secrets Threaten National Security." The Washington Post, January 26, 2003. Accessed via http://www.washingtonpost.com/ac2/wp-dyn/A46664-2003Jan26.
Loeb, Vernon. 2003. “Rumsfeld’s Man on the Intelligence Front.” The Washington Post, February 20,
2003. Accessed via http://washingtonpost.com/ac2/wp-dyn/A52642-2003Feb10.
Lowenthal, Mark M. 2003. Intelligence: From Secrets to Policy, 2nd ed. Washington, DC: CQ Press.
Matthias, Willard C. 2001. America’s Strategic Blunders: Intelligence Analysis and National Security
Policy, 1936-1991. University Park, PA: The Pennsylvania State University Press.
Moore, Robin. 2003. The Hunt for Bin Laden: Task Force Dagger: On the Ground with the Special
Forces in Afghanistan. New York: Random House.
Morgan, Patrick. 1983. “The Opportunity for a Strategic Surprise.” In Klaus Knorr and Patrick Morgan,
eds., Strategic Military Surprise: Incentives and Opportunities. New Brunswick, NJ: Transaction
Books, pp.9-42.
Oren, Michael B. 2002. Six Days of War: June 1967 and the Making of the Modern Middle East. New
York: Oxford University Press.
Persico, Joseph E. 1990. Casey: From the OSS to the CIA. New York: Penguin Books.
Pettee, George S. 1946. The Future of American Secret Intelligence. Washington: Infantry Journal Press.
Pforzheimer, Walter. 1980. “Discussion.” In Roy Godson, ed., Intelligence Requirements for the 1980’s:
Analysis and Estimates. Washington D.C.: National Strategy Information Center.
Platt, Washington. 1957. Strategic Intelligence Production: Basic Principles. New York: Frederick A.
Praeger.
Prange, Gordon W., in collaboration with Donald M. Goldstein and Katherine V. Dillon. 1981. At Dawn
We Slept: The Untold Story of Pearl Harbor. New York: McGraw-Hill.
Prange, Gordon W., with Donald M. Goldstein and Katherine V. Dillon. 1986. Pearl Harbor: The Verdict
of History. New York: Penguin Books.
Riebling, Mark. 2002. Wedge: From Pearl Harbor to 9/11: How the Secret War between the FBI and
CIA Has Endangered National Security. New York: Simon & Schuster.
Risen, James. 2002. “U.S. Failed to Act on Warnings in ’98 of a Plane Attack.” The New York Times,
September 19, 2002. Accessed via http://www.nytimes.com/2002/09/19/politics/19INTE.
Risen, James. 2002. “C.I.A.’s Inquiry on Qaeda Aide Seen As Flawed.” The New York Times, September 23, 2002. Accessed via http://www.nytimes.com/2002/09/23/national/23INTE.
Roberts, Sam. 2002. “A Catastrophic Failure To Think the Unthinkable.” The New York Times, November 20, 2002, p.B9.
Schmidt, Susan. 2002. “Lawyers for FBI Faulted In Search.” The Washington Post, September 25, 2002.
Accessed via http://www.washingtonpost.com/wp-dyn/articles/A62612-2002Sep24.
Shenon, Philip. 2002. “Early Warnings on Moussaoui Are Detailed.” The New York Times, October 18,
2002.
Tepker, Harry F. 2002. “The USA Patriot Act.” Extensions (A Journal of the Carl Albert Congressional
Research and Studies Center). Fall 2002: 9-13.
Thomas, Evan. 2002. “‘The Age of Sacred Terror’: Don’t Bother Me.” The New York Times, November
3, 2002. Accessed via http://www.nytimes.com/2002/11/03/books/review/03THOMAST.html.
Treverton, Gregory F. 2001. Reshaping National Intelligence for an Age of Information. New York:
Cambridge University Press.
United States Congress. 2002. “Joint Inquiry Staff Statement, Part I.” Eleanor Hill, Staff Director, Joint
Inquiry Staff, September 18, 2002. Accessed via
http://www.fas.org/irp/congress/2002_hr/091802hill.html.
Washington Post Staff Writers. 2002. “Lost Chance on Terrorists Cited.” The Washington Post, October
2, 2002. Accessed via http://www.washingtonpost.com/ac2/wp-dyn/A30043-2002Oct1.
Wirtz, James J. 1991. The Tet Offensive: Intelligence Failure in War. Ithaca, NY: Cornell University
Press.
Wise, Charles R. 2002. “Reorganizing the Federal Government for Homeland Security: Congress Attempts to Create a New Department.” Extensions (A Journal of the Carl Albert Congressional Research and Studies Center). Fall 2002: 14-19.
Wohlstetter, Roberta. 1962. Pearl Harbor: Warning and Decision. Stanford: Stanford University Press.
Zegart, Amy B. 2003. Flawed by Design: The Evolution of the CIA, JCS, and NSC. Stanford, CA: Stanford University Press.