9/11, A Decade Later – A better paradigm emerges for cyber security
The events of 9/11 illustrate in tragic detail the shortcomings of a black list approach to national security. The so-called black list model seeks to identify threats before they can manifest. The drawback, of course, is that it cannot possibly defend well against every foreseeable threat, and it is powerless against the unanticipated.
The counterpoint to the black list is the white list approach, which holds sole authority to define and grant all permissible activities. By permitting only what is pre-approved, it need not monitor endlessly for bad behavior, and it provides a stiffer defense against unimagined attacks.
While the white list is an impractical approach in the real world, it has applications in the virtual world of cyber security, and the tools to enable it have evolved quickly since 9/11. A decade ago, the rise of mobile and remote computing was already putting more laptops, data, applications and users beyond the security of the traditional network firewall. As the digital world became more mobile, cyber attacks grew more sophisticated, as well as more ambitious.
According to the NSA, 250,000 cyber attacks are launched against Department of Defense information systems each year. And, as headlines from the last few months attest, hackers are targeting large commercial networks ever more boldly, from Sony to PBS to Citigroup. Further, coverage of the recent cyber attacks on Google and defense contractor Lockheed Martin strongly suggested an active role by foreign powers. These trends are portentous and, although our digital infrastructure remains largely uncompromised today, complacency toward such threats is no longer tenable in a post-9/11 world.
Many of these attacks could be hindered and even eliminated through a white list approach to cyber security, wherein the identities of all individuals, organizations and devices on the network are proven before any transaction occurs between them. Within the IT industry, this is known as trusted computing.
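The difference between the two models can be sketched in a few lines of illustrative Python (the device identifiers and function names below are hypothetical examples, not any real product's API):

```python
# Illustrative sketch of black list vs. white list access control.
# Device IDs and both sets are hypothetical examples.

KNOWN_BAD = {"device-666"}                   # black list: enumerate threats
APPROVED = {"laptop-001", "laptop-002"}      # white list: enumerate trust

def blacklist_allows(device_id: str) -> bool:
    # Permits anything not yet identified as a threat --
    # powerless against the unanticipated.
    return device_id not in KNOWN_BAD

def whitelist_allows(device_id: str) -> bool:
    # Permits only pre-approved identities -- an unknown
    # attacker is denied by default.
    return device_id in APPROVED

unknown_attacker = "device-999"
print(blacklist_allows(unknown_attacker))  # True  -- slips through
print(whitelist_allows(unknown_attacker))  # False -- denied by default
```

The asymmetry is the whole argument: the black list must anticipate every attacker, while the white list only has to know its own members.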
The foundation of trusted computing shifts the focus of digital security from the user to the device. It favors hardware-based device identification to ensure only known computers, applications and users gain access to information and resources on a private network. Far from being a new or untested modality, device identification has long provided strong network security for cellular networks and cable providers -- both of which have virtually eliminated the once frequent illegitimate use and theft of their services.
Ten years ago, trusted computing would have been impossible to implement on data networks given the technologies available at the time. (And, indeed, conventional user-based security tools of today -- such as USB tokens and smart cards -- cannot achieve it by themselves.) That began to change in 2003, when IT leaders, including AMD, Hewlett-Packard, IBM, Intel Corp., Microsoft, Sony Corp., Sun Microsystems, and Wave Systems, assembled to form the Trusted Computing Group (TCG). Shortly thereafter, the group released its open standard for the first interoperable root of trust for computing: the trusted platform module (TPM).
The TPM is a cryptographic security chip integrated into a computer’s motherboard that effectively converts the computer itself into a security token. It enables IT managers to remotely create, sign and store authentication keys within a PC’s hardware, strongly binding the identity of the machine and its user to the device. Further, because keys are stored and protected within embedded hardware, they cannot be changed or stolen by malware.
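The core idea -- a key that answers challenges but never leaves the hardware -- can be illustrated with a toy software stand-in. This is a conceptual sketch only: real TPM keys are generated on-chip and exercised through the TPM command interface, not a Python class, and the HMAC here merely mimics the challenge-response pattern.

```python
import hashlib
import hmac
import os

class SimulatedTPM:
    """Toy stand-in for a TPM: the key lives inside the object and is
    never returned to callers -- only challenge responses are."""
    def __init__(self):
        self._key = os.urandom(32)  # real TPMs generate this on-chip

    def sign(self, challenge: bytes) -> bytes:
        # The device proves its identity by answering a server-issued
        # challenge; the key itself never crosses the boundary.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

tpm = SimulatedTPM()
challenge = os.urandom(16)          # nonce issued by the network
response = tpm.sign(challenge)

# A server that enrolled the device's key at provisioning time
# recomputes the expected response and compares:
expected = hmac.new(tpm._key, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))  # True for the genuine device
```

Because each challenge is a fresh nonce, a captured response cannot be replayed, and a machine without the enrolled key cannot produce a valid answer.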
More recently, the TCG expanded its open standards to include another root of trust for computing: the self-encrypting drive (SED). Under the TCG’s Opal standard, SEDs comprise a protected and independent architecture: they include their own processor and memory, and impose very strict limits on the code that can run within them. SEDs provide a hardware-based container that securely houses encryption keys and user access credentials. Because the encryption key never leaves the drive’s protected hardware boundary, it cannot be stolen and is immune to traditional software attacks.
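The containment model can be sketched as follows. This is a simplified illustration, not the Opal protocol: real SEDs wrap the media encryption key with dedicated AES hardware, whereas this toy uses PBKDF2 and an XOR mask to show the shape of the idea -- the key is generated inside the drive and is only ever unlocked there.

```python
import hashlib
import os

class SimulatedSED:
    """Toy model of an Opal-style self-encrypting drive: the media
    encryption key (MEK) is created inside the drive and is unlocked
    only when the user presents the right credential."""
    def __init__(self, password: str):
        self._mek = os.urandom(32)  # generated inside the drive, never exported
        kek = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000)
        # Store only the masked MEK; real drives use hardware AES key
        # wrapping rather than this illustrative XOR.
        self._wrapped = bytes(a ^ b for a, b in zip(self._mek, kek))

    def unlock(self, password: str) -> bool:
        # Derive the key-encrypting key from the credential and check
        # whether it unwraps the stored MEK correctly.
        kek = hashlib.pbkdf2_hmac("sha256", password.encode(), b"salt", 100_000)
        return bytes(a ^ b for a, b in zip(self._wrapped, kek)) == self._mek

drive = SimulatedSED("correct horse")
print(drive.unlock("correct horse"))  # True  -- drive releases the data
print(drive.unlock("wrong guess"))    # False -- data stays ciphertext
```

Software running on the host only ever sees the unlock verdict; the raw key material stays behind the drive's boundary, which is why stealing the laptop does not yield the data.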
The TCG’s component members have done more than develop interoperability standards for TPMs and SEDs over the past decade. They’ve actively embedded these technologies into their enterprise-class offerings. To date, TPMs are on board most, if not all, enterprise-class laptops and PCs, and SEDs are available from most leading PC OEMs.
Active management and use of these technologies is spreading quickly. The commercial sector has led the adoption curve for trusted computing, with TPMs and SEDs appearing in ever-broader deployments -- including rollouts spanning tens of thousands of seats at leading companies across the automotive, healthcare, chemical, energy and professional services industries.
Government enterprises are also adding momentum to trusted computing. For years, the U.S. Army has required every new PC procured in support of its enterprise to come equipped with a TPM; and, in 2007, virtually the entire Department of Defense followed suit. In addition, the National Security Agency’s High Assurance Platform (HAP) initiative has defined a framework for developing secure computing platforms using commercially available trusted computing technologies. Further, the agency has taken a leadership role by hosting the second annual Trusted Computing Conference in Orlando this month.
Just a few months after his inauguration, President Obama identified our digital infrastructure as a strategic national asset and plainly stated that America's economic prosperity in the 21st century depended on strong cyber security.
“We count on computer networks to deliver our oil and gas, our power and our water,” Obama said. “We rely on them for public transportation and air traffic control. Yet we know that cyber intruders have probed our electrical grid and that in other countries cyber attacks have plunged entire cities into darkness.”
Improving cyber security was among Obama’s first executive actions, and it recently manifested in the administration’s National Strategy for Trusted Identities in Cyberspace (NSTIC) initiative. NSTIC’s central vision is an online environment where individuals and organizations follow well-defined standards to obtain and authenticate their digital identities -- effectively signaling that the government has recognized the merits of open-standards hardware security.
Amidst all these changes of the past decade, one thing remains the same: both terrorists and hackers can suffer 100 defeats, yet appear to have won after a single success. The key difference is that, unlike the real world, the virtual world provides the means to trust the identity of every user and device within a system, and to guarantee that only those who follow the rules will enjoy the system’s freedoms. The tools for trusted computing are widely deployed today and have reached the critical mass needed to support widespread application -- and, with it, this remarkable new digital society.
Steven Sprague is president and CEO of Wave Systems Corp. He can be reached at: