The importance of increasing the use of Red Teams
The intelligence agencies and military branches of the U.S. Government need to operate live “Red Teams” against physical structures, as well as Red Teams that are separate from the planners during the war-gaming phase of operational planning. These Red Teams need to be external to the agency or organization they support. Increasing the use of Red Teams is more important now than ever due to the dynamics of combating terrorism.
Red Teams typically are small groups of persons from individual agencies or organizations that test the effectiveness of U.S. infrastructure, operations, capabilities, plans and concepts. The Red Team attempts to observe the U.S. agency or organization’s infrastructure, operation, capability, plan or concept from the enemy’s point of view in order to identify vulnerabilities. Identifying such flaws, and correcting them before the enemy would have a chance to act, would ultimately improve security.
Three examples help demonstrate the potential benefits that Red Teams provide. The first example is the U.S. Navy’s “Red Cell.” In 1984, the Deputy Chief of Naval Operations, Vice Admiral James Lyons, Jr., tasked Navy SEAL Commander Richard Marcinko with building and leading a team that would test the security measures of the U.S. Navy. Commander Marcinko formed a team, composed mostly of SEALs, which he led to penetrate U.S. naval bases and expose the security flaws of those bases. Red Cell had the backing and oversight of senior leadership to conduct the “attacks” and it coordinated to ensure that base security did not inadvertently mistake the Red Cell for an actual enemy assault. Red Cell successfully conducted these exercises at several naval bases. During the pre-mission reconnaissance of the U.S. Navy’s Trident and Ohio-class nuclear submarine home base in New London, CT, Red Cell found numerous lapses in security. These included: no true front gate, an ordnance facility protected only by a single chain link fence and train tracks that ran through the middle of the base, all making it easy for an enemy to infiltrate the base and gather information.
Red Cell also was able to rent a small boat, fly a Soviet flag from it and get close enough to the base to take photographs of classified features of the submarines. This occurred in 1985, well before the fall of the Soviet Union. Even after Red Cell informed the base security of the exact time that their attack would take place, they were successful in breaching the base and raiding a particular building, remaining undetected all the while.
The second example involves the Federal Aviation Administration (FAA). The FAA had a Red Team that was an elite group of security agents who traveled to major airports within the U.S. and abroad to conduct covert penetration testing of airport security systems, in order to provide the FAA with realistic data on the state of aviation security.
The team found serious security weaknesses at Boston’s Logan International Airport -- the same airport from which two of the four hijacked planes used in the 9/11 attacks originated. The agents who were part of the Red Team believed that their findings were covered up by FAA officials. Perhaps, if the security flaws they identified had received more attention, the events of 9/11 might not have occurred.
The third example deals with the realm of cyber-security. The National Security Agency (NSA) has a Red Team that attempts to hack into Department of Defense (DoD) computer systems in order to identify gaps that need to be secured before the enemy has a chance to exploit them. The NSA Red Team is separate from the rest of the NSA and does not give advance notice of its attempts to breach DoD entities, but it does leave a calling card on any network it is able to breach, informing the network administrator of the security compromises that need to be fixed. According to one Red Team member, the majority of its personnel are military and civilian government employees, along with a small cadre of contractors. The military members mainly conduct the operations (the actual breaking and entering), while the civilians and contractors mainly write code to support those endeavors. The goal is not to damage anything, but to identify the security flaws.
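The kind of network probing such a team performs, checking which services on a target host are reachable before attempting a breach, can be illustrated with a minimal TCP connect scan. This is a hypothetical sketch for illustration only (the function names `check_port` and `scan` and the example host and ports are invented here), not a description of the NSA Red Team's actual tooling:

```python
import socket

def check_port(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        # create_connection attempts a full TCP handshake within the timeout
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False

def scan(host: str, ports: list[int]) -> list[int]:
    """Return the subset of ports that accepted a connection."""
    return [p for p in ports if check_port(host, p)]
```

In a sanctioned exercise, a call such as `scan("10.0.0.5", [22, 80, 443])` would reveal which of those services are exposed on the target; as the constraints discussed below make clear, a real Red Team would run such probes only with senior leadership's written authorization.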
One benefit of Red Team activities is that they challenge preconceived judgments through demonstration. They also serve to clarify the true problem that planners are attempting to mitigate. Additionally, a Red Team provides a more accurate understanding of how sensitive information appears from an outsider’s point of view, and it can highlight exploitable patterns and unfounded assumptions in controls and planning.
Many times, situations turn out differently than one anticipates. More security problems can be identified through demonstration by both the live Red Teams and the war-gaming Red Teams. The U.S. Navy’s Red Cell identified many security shortfalls by demonstrating that penetrating a secured base was possible. It is easier to identify exploitable patterns from the outside looking in than from the inside looking out.
Red Team constraints include operational, political and safety limitations that need to be considered. Operationally, senior leadership needs to approve Red Team activities. For example, Commander Marcinko had permission from the Deputy Chief of Naval Operations. Having this approval protected Commander Marcinko, a Navy O-5, from being punished by the base commanders, typically Navy Captains (O-6). Operational coordination also needs to be conducted to minimize confusion and keep everyone safe.
A political limitation is that any change of policy usually takes a long time and might exceed the scope of the policy-makers’ influence. “Policy-makers don’t always have the required range of response options recommended by a Red Team,” one observer noted, as some Red Team suggestions may be too controversial.
Safety is another limitation. Cooperation would be required from security officers for some parts of the exercise. Security at least has to agree to allow an exercise to take place on its platform to avoid mistaking a Red Team member for an actual adversary.
Another negative aspect of Red Teams is that they cost money. Red Team members are drawn away from other teams that may have higher priority, and they need to be provided with additional training. They also need the means to conduct their red operations, including specialized gear or computers and additional means of transportation -- anything an enemy would use to seek information on blue forces. However, the cost of not having Red Teams could prove to be even greater.
The FAA example above suggests that the 9/11 attacks might have been prevented had the FAA Red Team’s advice been implemented to improve security. The cost of training and maintaining a Red Team is much less than the cost of recovering from an enemy attack.
How should the U.S. Government promote the use of Red Teams? By implementing a program developed around proper education. Historically, many agencies and organizations have used some form of a Red Team, but “there has not been a formal educational or training program nor common Red Teaming doctrine, procedures, methodologies, or framework in the past,” observed an Army report in 2008. This education and training should be specific to the agency or organization that the Red Team would be working against (or for, actually). The U.S. Army has a course, the Army Red Team Leaders Course (RTLC), run by the University of Foreign Military and Cultural Studies (UFMCS) at Fort Leavenworth in Kansas. RTLC is a graduate-level course designed to help students effectively anticipate change, reduce uncertainty and improve operational decisions. The aim of the program is for its students to look at issues from the adversary’s perspective, with the goal of recognizing alternative strategies. A course like this should be available to all U.S. agencies or organizations that might require a Red Team. The U.S. Army has approved the addition of Red Teams to nearly every echelon of command. It is also adding the concept to doctrine and refining techniques and procedures. Other U.S. Government organizations need to do the same.
While there are different ways to use Red Teams, they should consist of experts in their field -- the U.S. Navy’s Red Cell was composed mostly of Navy SEALs, the FAA Red Team was composed primarily of FAA security agents and the NSA Red Team was composed mostly of cyber intelligence analysts. A Red Team of experts is more likely to identify security gaps that might be missed by someone who is not as experienced in that field. In each example, the Red Team was separate from the internal base, airport or computer security elements. This separation is essential for a Red Team to be successful.
When units in the military make operation plans, part of the process usually includes “war-gaming.” This typically is the only time when the planners will remove their blue hats and put on their red hats (figuratively), to sit and think about how the enemy may perceive their plan. The planners will try to think of how their plan will unfold from the enemy’s point-of-view. Thinking or discussing this generally is the extent of their effort to identify problems with the plan. Having a dedicated Red Team that is separate from those doing the planning will enable the former to focus its entire effort on ways to defeat the plans. As mentioned earlier, it is easier for one to identify exploitable patterns from the outside looking in than from the inside looking out. The blue hats should wear only blue, the red hats should wear only red. This enables each element to stay focused on its objective.
In addition, Red Teams need senior leadership support, and they should incorporate diversity. If a Red Team does not have the support of senior leadership, its efforts will not improve the organization. Unless senior leadership dictates that the senior blue person learn from the Red Team, the security gaps the Red Team identifies will not be corrected. The FAA Red Team example shows the consequences of failing to do this. A Red Team, perceived as an annoyance by the blue planners or security personnel, will be ignored unless senior leadership demands that its advice be implemented.
Having diversity within the Red Team also is a must; however, this should not counter the earlier point about Red Team members being experts in their field. Diversity within Red Teams will expand the ideas that the teams create; they will have a much broader span of creative thoughts.
International terrorist activities against the U.S. and U.S. allies have increased since 9/11. Therefore, the need to increase the use of Red Teams is more important than ever, in order to counter the enemy’s creative ways. The more creative one’s enemy is, the more creative we must be to identify any way the enemy may attack us, before actually being attacked.
Agencies and organizations in the U.S. need to promote the use of Red Teams. Operating live Red Teams against physical structures and cyber-space -- and Red Teams that are separate from the planners during the war-gaming phase of planning operations -- is crucial for successfully countering security threats. The need for increasing the use of Red Teams is more important than ever.
Phil Mayr recently transitioned from the Marine Corps and is in graduate school at Johns Hopkins University. He can be reached at: