
A Brief Overview

The United States is a society totally dependent on interlocking networks and nodes for communications, transportation, energy transmission, financial transactions, and essential government and public services. Disruption of key nodes by terrorists could cause havoc, untold expense, and perhaps even mass deaths. We are, in the jargon of the trade, a ‘target-rich environment.’

Senator Patrick Leahy (1990)

Critical infrastructure . . . Ask the average American to define the term and the response is likely to be a blank stare or bemused shrug. But mention water supplies, the electrical grid, banking networks, air traffic control or oil pipelines—and understanding dawns.

The term critical infrastructure—or CI—is a relatively new invention, but services and systems that are important to human activity have been with us for centuries. So long, in fact, that we tend to take them for granted, until something or someone disrupts their operation. The usual suspect is Mother Nature, who delivers her message in the form of fires, floods, hurricanes, fallen trees, curious squirrels, earthquakes, and other forces. Anyone whose flight out has been cancelled due to bad weather, or who has driven miles out of the way to avoid a washed-out bridge, knows that nature has its ways of reminding us of who’s boss.

Simple wear and tear also poses a constant challenge to the health and well-being of critical infrastructure. The problem is not a small one: nearly four million miles of roadway alone crisscross the country, not to mention thousands of dams, airports, power stations, and bridges.

Other problems that threaten critical infrastructure include technological obsolescence, poor maintenance, accidents, or that perennial peril: human error. As one infrastructure expert notes with a touch of humor: “There are more communications systems taken down per day by the backhoe than by anybody else.”

Last, but far from least, intentional destruction of infrastructure has always been an issue. It remained a fairly low-key concern during peacetime until, after September 11, it jumped to the top of the list of federal priorities. It is this last category of potential CI disruption that has prompted the most recent flurry of discussion about the safety and security of the nation’s infrastructure.


The strategy of targeting an adversary’s vital systems or services for destruction by sabotage or outright attack is far from novel. History offers abundant examples of stratagems designed either to protect one’s own critical infrastructure from peril or to destroy that belonging to an adversary, especially during times of war. One well-known example from World War II is the “Casablanca directive,” which laid out plans for an Allied strategic bombing campaign designed to destroy the German military, industrial and economic system and simultaneously undermine the will of the German people to fight. Allied bombing attacks on the country’s railway system after D-Day in 1944 decimated approximately two-thirds of German rolling stock, slowing delivery of goods to the point that the nation’s economy was pushed towards collapse.

The main goal of such targeted attacks, of course, is to destroy—or at least cripple—an adversary’s ability to fight, to resupply itself, to communicate, or to function normally. The anthrax scare that shut down a U.S. Senate Office Building in late 2001, for example, appears to have been an attempt to hobble critical infrastructure—in this case, the operations of government.

Destruction isn’t always limited to the “enemy’s” critical infrastructure. The history of warfare offers many examples of intentional destruction of components of one’s own infrastructure—blowing up bridges, for example, to short-circuit the advance of the enemy. Of course, military history also touts the exploits of outfits specially trained to repair or replace destroyed structures on demand. The U.S. Army Corps of Engineers excels at throwing bridges across rivers virtually overnight. The Corps was so adept at cobbling together infrastructure during World War II that General Douglas MacArthur dubbed the conflict “an engineer’s war.”


Naturally, there’s a flipside to all of these potential dangers and destructive forces: It’s the defense of critical infrastructure, or what is today called critical infrastructure protection or CIP.

Critical infrastructure protection includes wide-ranging efforts to fortify, insulate and, if need be, quickly repair, rebuild or replace vital systems and services. A few conventional examples include security checkpoints, system redundancies, regularly scheduled backups, and preventive maintenance. Even the “Miss Utility” program in areas like Washington, D.C., is an exercise in CIP: the program is designed to keep homeowners from hitting gas, electrical and water lines when excavating for the family swimming pool.

Beginning in the 1980s, the growing use of computers in business and government—combined with easy access to the Internet—added a fresh dimension to CIP. Quietly but quickly, much of the nation’s most basic infrastructure came to depend on computers to control many essential functions. Suddenly, not only was the physical infrastructure itself vulnerable to conventional methods of destruction such as explosives, but the overlay of high-tech networks that controlled it was also a potential target of would-be terrorists, criminals, disaffected employees, and bored teenagers equipped with a laptop and leisure time.


Other than the occasional inconvenience caused by an electrical blackout or Internet virus, the average American has rarely had reason to think about, much less fret over, the perils of increasing interconnectedness among systems, or to worry about the degree to which the nation’s economy, society and government have come to depend upon complex communications, power and transportation networks.

Those on the “front lines” of protecting the country, however, recognize a potential Achilles heel in the nation’s otherwise formidable defenses. And not simply in the wake of the drama of September 11. Long before the attacks of that day, events hinted at possible dangers. During the 1970s and 1980s, an escalating series of attacks on Americans overseas, as well as plane hijackings and bombings of U.S. bases and embassies abroad, suggested that the nation might be vulnerable to similar attacks inside its borders. Congress, the White House, the Department of Defense, the Federal Emergency Management Agency, the National Security Council, the FBI, and a few others began to document the issue. Hearings, reports and studies all underscored the ease with which a creative or determined terrorist could disrupt or destroy one or more of the “critical nodes” that control many of the nation’s networks.

In spite of mounting evidence of the nation’s vulnerabilities, federal-level action was limited. One formidable stumbling block was the fact that the vast majority of the nation’s infrastructure is not owned by the federal government, but by the private sector. Any serious effort to address the protection of infrastructure requires the commitment and cooperation of thousands of independent, profit-making enterprises, many of which have their own ideas about how to balance safety and security with the bottom line.

Other obstacles to action included the “it can’t—or won’t—happen here” mindset that tends to operate in the absence of emergency. The bias toward inaction was further reinforced by a fear (well-founded or not) that talking about vulnerabilities would put ideas into the heads of mischief-makers. Finally, until the Cold War ended, the specter of nuclear annihilation played the role of 800-pound gorilla in national security debates and discussions, overshadowing all but the most dramatic events. A regional power outage or disruption of the telephone system could be viewed as a mere nuisance when compared with the prospect of a deadly mushroom cloud.


In 1991, the official dissolution of the Soviet Union brought an end to the four-decade-long Cold War and opened a new chapter in global politics and power struggles. As the sole superpower, the United States became a primary target for organizations and individuals who resented or distrusted its economic power, global political strength and foreign and military policies. In 1993, the bombing of the World Trade Center in New York City by Muslim extremists brought home the reality that the nation was not impervious to strikes within its borders by those who wished to do it harm. Moreover, attacks on the U.S. were not limited to traditional nation-states like the U.S.S.R., but could readily originate with small groups of terrorists tied mostly by ideological bonds, or with individuals acting alone. The old paradigm of a Cold War bi-polar world no longer resonated.

Two years after the World Trade Center attack, another bombing dramatically underscored the magnitude of the country’s vulnerability. On April 19, 1995, a Ryder rental truck filled with explosives destroyed much of the Alfred P. Murrah Federal Building in Oklahoma City. The deliberate destruction of an unassuming federal building, located far outside the “nerve center” of Washington, D.C., set off a chain reaction of events and effects that reverberated well beyond the Midwest. Government officials realized that the toll on human life and the building itself was compounded by the crippling of key functions. From the loss of a payroll department to the devastation of a wing of the FBI, damage from the explosion was felt by other government agencies, employees and the private sector from coast to coast.


In the wake of the Oklahoma City bombing, then-President William J. Clinton signed Presidential Decision Directive 39 (PDD 39), establishing the Critical Infrastructure Working Group (CIWG), an interagency panel to assess what could be done to minimize or prevent similar events in the future. Among the CIWG’s recommendations was a call for a presidentially-appointed commission to explore in greater depth the relationship between the country’s critical infrastructures and national security.

In response to the CIWG’s report, Clinton issued Executive Order 13010, creating the President’s Commission on Critical Infrastructure Protection (PCCIP). Chaired by General Robert T. (Tom) Marsh, the PCCIP included senior-level government officials, private industry executives, and leaders from the academic community. The public-private make-up of the Commission was designed to ensure that infrastructure owners and government agencies would work together to identify problems and propose solutions.

After fifteen months of study, the Commission published its findings in October 1997 in a report titled Critical Foundations. Among other things, the report systematically outlined the vulnerabilities associated with each of eight sectors, including the dangers posed by growing interdependency among networks. Although the Commission’s tone overall was moderate, the group painted a stark picture of widespread and, in many cases, unwitting gaps in security.

As a follow-up to the Commission’s report, Clinton issued Presidential Decision Directive 63 (PDD 63). Implementation of PDD 63 included creation of various new departments within existing federal agencies, among them the Critical Infrastructure Assurance Office (CIAO) within the Department of Commerce and the National Infrastructure Protection Center (NIPC) within the FBI. PDD 63 also established a new type of public/private forum called Information Sharing and Analysis Centers (ISACs). The ISACs were designed to promote dialogue within critical infrastructure sectors (e.g., banking, transportation, power) by encouraging sector members to share information about existing and potential vulnerabilities.


While implementation of PDD 63 got underway, other high-level panels also began to address aspects of the nation’s approach to homeland security. In October of 1998, Congress directed the Department of Defense (DOD) to create a panel to study the topic, in consultation with the U.S. Attorney General, the Department of Energy, the Department of Health and Human Services, and the Federal Emergency Management Agency (FEMA). In April of 1999, DOD Secretary William Cohen named Virginia Governor James Gilmore to head the blue-ribbon commission. The panel’s primary objective was “to assess domestic response capabilities for terrorism involving weapons of mass destruction (WMD).”

In the first of four annual reports to Congress, the Gilmore Commission declared that the possibility that terrorists will use “weapons of mass destruction (WMD) in this country to kill and injure Americans, including those responsible for protecting and saving lives, presents a genuine threat to the United States.” The report further noted that “should an attack occur, we must be confident that local, state, and Federal authorities are well prepared to respond and to address the consequences of the entire spectrum of violent acts.” The Commission’s second report underscored the lack of a national strategy to combat terrorism, expressed concern about fragmentation of authority at the federal level, and highlighted insufficient coordination of, and attention to, state and local efforts to combat terrorism. The report recommended the creation of a national office for combating terrorism within the executive branch, something that the earlier PCCIP had also considered—and rejected—having gauged that, in 1997, the concept was unlikely to win political support without a galvanizing incident.

Another panel—the United States Commission on National Security/21st Century—issued its own set of recommendations in February 2001 in a report titled Seeking a National Strategy. Chaired by former Senators Gary Hart (D) and Warren Rudman (R), the so-called Hart-Rudman Commission echoed the earlier reports, speaking anxiously of the inevitability of a major terrorist act on U.S. soil and of the nation’s weak ability to prevent or respond to such an attack—concerns which were validated just eight months later on September 11. Although neither Hart-Rudman nor Gilmore focused specifically on critical infrastructure, the reports nonetheless reinforced the basic message of the PCCIP: the time for action was now, not later.


Following the September 11 attacks, the Hart-Rudman Commission’s recommendations received bipartisan support and were largely adopted by the Administration and Congress. Among other things, the Commission called for the creation of a new federal agency, to be named the National Homeland Security Agency (NHSA). The new organization’s raison d’être would be “to consolidate and refine the missions of the nearly two dozen disparate departments and agencies that have a role in U.S. homeland security today.”

While agreeing with Hart-Rudman that a central coordinating point for “homeland security” was called for, President George W. Bush initially chose to establish the function within the White House, under the title of Office of Homeland Security (OHS). Political pressures, however, led to its elevation to Cabinet-level status as the Department of Homeland Security (DHS) in November 2002. OHS director Tom Ridge, the former Governor of Pennsylvania, was named the nation’s first Secretary of Homeland Security. Together with many other security-related functions, critical infrastructure protection was folded into the sprawling new department. Today, the Information Analysis and Infrastructure Protection Directorate within DHS handles critical infrastructure policy issues.

Please visit our Timeline for a detailed chronology of events in the history of critical infrastructure protection.