Malware detection: Where the White House went wrong
By Nir Polak
Espionage isn’t what it used to be. While Hollywood still loves to produce thrillers about secret agents in disguise, the news tells a different story about how intelligence leaks out of the U.S. government. Often, it starts with an incoming email that looks innocuous. That’s exactly what happened to the White House recently, when a group of Russian hackers launched a sustained attack armed only with a phishing email and a piece of malware.
Researchers at Kaspersky Lab say that the so-called CozyDuke advanced persistent threat (APT) started in October. In April, the White House acknowledged that the hackers had obtained some highly prized intelligence – President Obama’s schedule. The way this happened highlights why the government and the private sector both need to change their approaches to cybersecurity.
The problem with malware detection
While targets at the White House were downloading files like “Office Monkeys LOL Video.zip” and watching funny clips, CozyDuke was running a malicious executable file on the government system. The hackers could detect the anti-virus software on their targets’ computers and mobile devices, determine how to evade detection and then take ownership of the system. Unsuspecting employees passed the video around to each other, but the people laughing hardest were probably the hackers, who were able to access and share files they found on the network.
Unfortunately, attempts to detect the kind of malware the CozyDuke hackers used are more or less futile. Eighty-two percent of malware disappears within an hour, and within that narrow window, hackers can find access points they use for months. While anti-malware tools are busy looking for smoking guns that have vanished, the criminals themselves move around undetected, thanks to the stolen credentials of legitimate users.
So how can government agencies, private organizations and public companies detect attackers after the initial recon and compromise – the malware – is gone? They need to focus on credential behaviors and access characteristics in the middle of the attack chain, where hackers spend the most time.
User behavior analysis and attack detection
Once hackers nab the credentials of a trusted user, they can spend up to 200 days establishing a presence on the target network, escalating privileges, gathering intelligence, and moving laterally in pursuit of additional data before ending their missions. Certainly, organizations can do more to stop these activities before they start. For example, there was an obvious failure in user education at the White House, where the initial mark opened the CozyDuke .zip file, and then subsequent users repeated the mistake. However, hackers create an estimated 80,000 new computer viruses a day, accompanied by many new ways to trick users into opening them. The newest viruses are now capable of evading malware sandboxing solutions. In addition, no amount of education can guarantee that 100 percent of workers will resist the temptation of a funny video file they should know better than to open.
When users inevitably let hackers in by opening malicious files, organizations need a mechanism for detection that is effective beyond the point of initial compromise. How can IT teams spot cybercriminals in disguise? The same way Hollywood reveals spies in so many TV and movie plots: through some anomaly in their behavior that separates them from everyone else and proves they aren’t who they say they are.
In cybersecurity, that anomaly detection comes from the use of machine learning and proprietary behavior analytics that learn a user’s normal credential behaviors so that what’s abnormal stands out. When enough variations from normal are seen, someone needs to pick up the phone and be able to ask, “Was that you accessing human resources data using the VPN from Eastern Europe, switching credentials, and accessing a database in a network zone neither you nor any of your peers have accessed before?” Any one irregularity could be a first indication that a hacker has begun to move around inside the network under the cover of stolen credentials, but in a dynamic business environment it takes a preponderance of evidence to know that an account has been taken over.
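To make the idea concrete, here is a minimal sketch of per-user baselining and evidence accumulation. This is an illustration only, not Exabeam’s actual analytics: the feature names (`geo`, `zone`, `resource`) and the alert threshold are assumptions chosen for the example, and a production system would use far richer models than set membership.

```python
from collections import defaultdict

class CredentialBaseline:
    """Learns which values of each behavior feature are normal for one user.

    Illustrative only: real user behavior analytics would model frequencies,
    time of day, peer groups, and many more signals.
    """

    def __init__(self):
        # feature name -> set of values this user has previously exhibited
        self.seen = defaultdict(set)

    def observe(self, event):
        """Record an event dict (feature -> value) as normal behavior."""
        for feature, value in event.items():
            self.seen[feature].add(value)

    def anomaly_score(self, event):
        """Count features whose value was never seen for this user."""
        return sum(1 for f, v in event.items() if v not in self.seen[f])

# Require a preponderance of evidence, not a single oddity (assumed threshold).
ALERT_THRESHOLD = 3

baseline = CredentialBaseline()
for event in [
    {"geo": "US", "zone": "corp-lan", "resource": "email"},
    {"geo": "US", "zone": "corp-lan", "resource": "file-share"},
]:
    baseline.observe(event)

# A login from an unfamiliar region, touching an unfamiliar network zone
# and an unfamiliar resource, trips all three novelty checks at once.
suspicious = {"geo": "Eastern Europe", "zone": "hr-db", "resource": "hr-database"}
score = baseline.anomaly_score(suspicious)
print(score)                      # 3
print(score >= ALERT_THRESHOLD)   # True -> time to pick up the phone
```

The design point is the threshold: any single novel value might just be a traveling employee or a new project, so the sketch only alerts when several independent deviations coincide, mirroring the “preponderance of evidence” standard described above.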
Beyond perimeter protection
When the cybersecurity profession came of age in the 1990s, attack prevention at the perimeter was the best strategy organizations had against hackers. But criminals have updated their methods in the decades since, and government and industry must do the same. While attack prevention remains an important goal, IT teams must focus more on detection and containment of hacks in progress. This is perhaps the most important lesson the CozyDuke incident teaches the market: to protect a larger attack surface, cybersecurity professionals need to patrol a larger area, one that extends from the perimeter into the core of the network.
Nir Polak is CEO and co-founder of Exabeam.