Righteous Malware and EternalBlue: a cybersecurity prophecy come true

In 2017 the world experienced two very costly outbreaks of malicious code, or malware: WannaCry and NotPetya. Collectively, they caused IT mayhem for thousands of organizations, imposing well over a billion US dollars in “respond and repair” costs (the impact of NotPetya on just three companies, FedEx, Maersk, and Merck, totaled at least US$875 million). These cyberattacks made headlines for their negative impact on several parts of the world’s critical infrastructure: the pharmaceutical supply chain, the delivery of medical services, and international shipping. In this article I argue that all of this could have been avoided if the US government had heeded the advice of cybersecurity experts, especially those who conduct anti-malware research.

This cyberattack made possible in part by…

The US government, along with several of its allies, has publicly attributed the WannaCry and NotPetya malware attacks to North Korea and Russia respectively. What the US government has not been very public about is its own role in enabling these attacks. Yet there is no doubt that, to borrow a phrase from public television: “these malicious programs were made possible in part by the US government”.

How is that possible? Well, in order to rapidly spread themselves from system to system, WannaCry and NotPetya used code known as the EternalBlue SMB exploit. This code was developed by the US National Security Agency (NSA) to compromise networked computer systems for what some people would consider righteous purposes (such as spying on the enemies of America).

What made the EternalBlue exploit possible was a vulnerability within something called Microsoft Server Message Block or SMB; this is a file sharing protocol used by Microsoft Windows. The NSA found this vulnerability more than five years ago but did not tell Microsoft about it. Why? Because it gave the NSA an exceptional ability to gain unauthorized access to other people’s computer systems (for justified or “righteous” purposes).
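To make the propagation mechanics concrete: EternalBlue travels over TCP port 445, the same port that legitimate SMB file sharing uses, which is why a single unpatched, reachable machine was enough for WannaCry and NotPetya to gain a foothold and spread. Below is a minimal sketch, in Python, of how a defender might map which machines on a network even expose that port (the 192.168.1.0/24 subnet is a hypothetical example, and reachability is not proof of vulnerability, only of the attack surface the worms traversed).

```python
# Crude SMB exposure check: report which hosts on a (hypothetical) local
# subnet accept connections on TCP 445, the port that SMB, and hence the
# EternalBlue exploit, travels over. "Open" here means reachable, not
# necessarily vulnerable.
import socket

def smb_port_open(host: str, timeout: float = 0.5) -> bool:
    """Return True if `host` accepts a TCP connection on port 445."""
    try:
        with socket.create_connection((host, 445), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    subnet = "192.168.1."  # hypothetical subnet; adjust to your own network
    exposed = [subnet + str(i) for i in range(1, 255)
               if smb_port_open(subnet + str(i))]
    print("Hosts exposing SMB (TCP 445):", exposed)
```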

Unfortunately, but quite predictably, the NSA lost control of this code. When it realized that this had happened, the NSA finally disclosed the vulnerability to Microsoft and the software giant created a patch for it. Unfortunately, but again quite predictably, some people outside of the US government began to exploit the vulnerability for their own ends before all of the vulnerable systems had been patched.

(The wholesale patching of computers within large organizations is extremely challenging; for a start, the patches have to be tested to make sure they don’t break other parts of the systems. In effect, the release of the WannaCry and NotPetya malware via the EternalBlue exploit became a very costly global test of system patching practices.)
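On that patching point, a concrete illustration: even verifying that the MS17-010 fix (the patch Microsoft shipped for the EternalBlue vulnerability) is actually installed across a fleet is non-trivial, because the relevant KB number differs by Windows version. Here is a minimal sketch of a single-host check, assuming a Windows machine with the wmic utility available; the KB identifiers listed are examples only, not an exhaustive set.

```python
# Minimal sketch of an MS17-010 patch check on a single Windows host,
# assuming the wmic utility is present (it shipped with the Windows
# versions affected by EternalBlue). KB identifiers differ by Windows
# version; the set below is illustrative, not exhaustive.
import subprocess

MS17_010_KBS = {"KB4012212", "KB4012213", "KB4012214", "KB4012598"}  # examples

def installed_hotfixes() -> set[str]:
    """Return the hotfix IDs reported by `wmic qfe get HotFixID`."""
    out = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in out.splitlines()
            if line.strip().upper().startswith("KB")}

if __name__ == "__main__":
    if installed_hotfixes() & MS17_010_KBS:
        print("An MS17-010 update appears to be installed.")
    else:
        print("No MS17-010 update found; host may be exposed to EternalBlue.")
```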

Tragically, all of this could have been avoided. Various versions of this scenario were prophesied long ago, by the first humans who worked with computer viruses.

The “good virus” debacle

Back in the 1980s, many people were fascinated by the potential use of self-replicating code — computer viruses and worms — for beneficial purposes. A classic example is a hypothetical maintenance virus spreading from machine to machine to discover and fix vulnerabilities in those machines as it progresses (this is very different from security updates pushed to your system by a vendor, over which you have control, and for which you have given permission).

The theory of “the good virus” was expertly articulated by Dr. Fred Cohen in his 1991 article: A Case for Benevolent Viruses. However, by that time several practical lessons had been learned, in some cases at great expense in terms of lost data and system downtime, about computer viruses and worms whose programmers claimed to have purely benevolent intentions. Here are three of those lessons:

  1. It is practically impossible to write code that works properly on all the systems to which it may spread (resulting in a “good virus” unintentionally doing bad things).
  2. It is practically impossible to prevent viral code from falling into the wrong hands (it may escape the lab, get stolen from the lab, be part of a test deployment that goes wrong, be intentionally deployed, and so on).
  3. There are ethically challenged persons who will exploit, for their own purposes, any code they get their hands on (remember: when your malware gets onto my computer I get a free copy of your work that I can reverse engineer, revise, and re-release; the sketch after this list shows how easily that process starts).
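That third lesson is worth making tangible. Once your binary lands on my machine, the first step of taking it apart requires no special tooling at all: dumping the printable strings from the file often reveals embedded commands, URLs, and configuration. A toy sketch in Python follows (the file name captured_sample.bin is a hypothetical stand-in for a captured binary):

```python
# Toy first step of reverse engineering a captured sample: dump runs of
# printable ASCII from the raw bytes. Real analysts reach for
# disassemblers and sandboxes, but even this simple pass often exposes
# embedded URLs, registry keys, and commands.
import re

def extract_strings(path: str, min_len: int = 6) -> list[bytes]:
    """Return printable-ASCII runs of at least `min_len` bytes."""
    with open(path, "rb") as f:
        data = f.read()
    return re.findall(rb"[\x20-\x7e]{%d,}" % min_len, data)

if __name__ == "__main__":
    # "captured_sample.bin" is a hypothetical file name.
    for s in extract_strings("captured_sample.bin"):
        print(s.decode("ascii"))
```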

In the 1990s the people working to prevent computer virus infections, the growing ranks of anti-virus (AV) researchers in academia and industry, increasingly rejected the idea of the good virus. In 1994, Vesselin Bontchev, then a research associate at the Virus Test Center of the University of Hamburg, polled the AV community on this topic and compiled the responses in his landmark article: “Are ‘Good’ Computer Viruses Still a Bad Idea?” The following table, drawn from that article, summarizes the reasons why “good” computer viruses are a bad idea:

Technical reasons: lack of control; recognition difficulty; resource wasting; bug containment; compatibility problems.

Ethical and legal reasons: unauthorized data modification; copyright and ownership problems; possible misuse; responsibility.

Psychological reasons: trust problems; negative common meaning.

Sadly, the sage advice conveyed by Bontchev’s article was not sufficiently taken to heart in some quarters (like parts of the US military and the NSA). So, 20 years later, when I had a chance to speak at the 6th International Conference on Cyber Conflict (CyCon), I made sure that I included the above table*.

The paper was titled: Malware is Called Malicious for a Reason: The Risks of Weaponizing Code (PDF). My co-author on the paper was Andrew Lee; we have both spent time in the trenches fighting malicious code. We were well aware that antivirus researchers had made repeated public warnings about the risks of creating and deploying “good” malware. So we decided to coin a new term to describe this type of code: righteous malware, that is, software deployed with intent to perform an unauthorized process that negatively impacts the confidentiality, integrity, or availability of an information system, to the potential advantage of a party to a conflict or supporter of a cause.

Righteous malware, WannaCry, and NotPetya

Needless to say, righteous malware is inherently problematic, if not entirely oxymoronic. A cause that might seem good to you could strike someone else as an existential threat; are you defending your nation or engaging in imperialist aggression? And as soon as you take the position that it is okay — even in limited circumstances — to run your code on someone else’s system without their permission, you implicitly give them permission to do the same. And if they discover the code you injected, it is now theirs to play with.

Furthermore, there is a multi-billion dollar industry devoted to preventing your code from running on someone else’s system without their permission; that is the function of what used to be called anti-virus software, then anti-malware, and is now more broadly described as endpoint protection. Most vendors in this space have publicly stated that they will never give a pass to government code.
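To see what “giving a pass” would actually mean in code terms, consider the simplest detection primitive that endpoint protection builds on: matching files against known-bad indicators. The sketch below, in Python, checks a file’s SHA-256 hash against a blocklist (the hash shown is a made-up placeholder, not a real indicator, and suspect.exe is a hypothetical file name); exempting “righteous” samples would mean deliberately punching holes in exactly this kind of mechanism.

```python
# Minimal sketch of signature-style detection: hash a file and compare it
# against a blocklist of known-bad SHA-256 values. Modern endpoint
# protection layers heuristics and behavioral analysis on top of this,
# but hash matching remains the simplest case. The hash below is a
# made-up placeholder, not a real indicator of compromise.
import hashlib

KNOWN_BAD_SHA256 = {
    "0000000000000000000000000000000000000000000000000000000000000000",  # placeholder
}

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of the file at `path`."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def is_known_bad(path: str) -> bool:
    return sha256_of(path) in KNOWN_BAD_SHA256

if __name__ == "__main__":
    print(is_known_bad("suspect.exe"))  # hypothetical file name
```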

With all that in mind, let’s recap the story of what happened with WannaCry and NotPetya:

  1. The NSA is tasked with defending the US by gathering sensitive information. One way it has chosen to do that is by installing NSA software on computers and other devices without the knowledge or permission of their owners.
  2. Installing software on computers without the knowledge or permission of their owners has always been problematic, not least because it can have unexpected consequences. It can also serve numerous criminal purposes, such as stealing, destroying, or ransoming information.
  3. Back in the 1980s there were numerous attempts to create self-replicating programs (computer worms and viruses) that inserted themselves on multiple computers without permission. Many of these caused damage and disruption even though in some cases that was not the intent of their designers.
  4. Numerous applications designed to help computer owners block unauthorized code were soon developed. These programs were generically referred to as antivirus software although unauthorized code was eventually dubbed malware, short for malicious software.
  5. The term malware reflects the overwhelming consensus among people who work at keeping unauthorized code off systems: “the good virus” does not exist. In other words, compromising information systems with unauthorized code has no redeeming qualities and system owners have a right to protect against it.
  6. Despite this consensus among experts, which has grown even stronger in recent years due to the industrial scale at which malware is now exploited by criminals, the NSA persevered with its secret efforts to install software on computers without the knowledge or permission of their owners.
  7. Because the folks developing such code think of it as a good thing, the term “righteous malware” was coined.
  8. Eventually, the folks who warned that righteous malware could not be kept secret forever were proven correct: a whole lot of it was leaked to the public, including EternalBlue.
  9. Criminals were quick to employ the “leaked” NSA code to increase the speed at which their own malicious code spread, for example using EternalBlue to help deliver cryptocurrency mining malware as well as ransomware.
  10. Currently there are other potentially dangerous taxpayer-funded malicious code exploits in the hands of US government agencies, including the CIA (for example, its Athena malware is said to be capable of hijacking all versions of Microsoft Windows, from XP to Windows 10).

So that’s how US government funded malware ends up messing up computers all around the world. There’s nothing magical or mysterious about it, just a series of risky decisions made in spite of warnings that the above could be the outcome.

For our CyCon paper, Andrew Lee and I created a new table, one that was mindful of all types of malware, including self-replicating code and injected code, such as trojans. Our table presented the “righteous malware” problem as a series of questions that any entity should have to answer before deploying such code; in the version we presented, the right answer to every one of the questions is Yes.

Clearly, the focus of our paper was the risks of deploying righteous malware, but many of those same risks attach to the mere development of righteous malware. Consider one of the arguments we addressed from the “righteous malware” camp: “There is nothing to worry about — if anything goes wrong after we deploy the malware, we can always deny that it was us that wrote and/or released the malware”. Here is our response from the 2014 paper:

This assertion reflects a common misunderstanding of the attribution problem, which is defined as the difficulty of accurately attributing actions in cyber space. While it can be extremely difficult to trace an instance of malware or a network penetration back to its origins with a high degree of certainty, that does not mean “nobody will know it was us.” There are always people who know who did it, most notably those who did it. If the world has learned one thing from the actions of Edward Snowden in 2013, it is that secrets about activities in cyberspace are very hard to keep, particularly at scale, and especially if they pertain to actions not universally accepted as righteous.

The world now possesses, in the form of WannaCry and NotPetya, direct proof that those “secrets about activities in cyberspace”, the ones that are “very hard to keep”, include malicious code developed for “righteous” purposes.

To those folks who argued that it was okay for the government to sponsor the development of code like EternalBlue because it would always remain under government control I say this: you were wrong. Furthermore, on this you will always be wrong. Based on 30 years of researching computer security I am sure there is no way for the creators of malware to ever guarantee control over their creations. The governments of the world would be well advised to conduct all of their cybersecurity activities with that in mind.

Furthermore, the idea of spending taxpayer dollars to find software vulnerabilities and keep them secret needs serious reevaluation. In the case of EternalBlue the price paid for the predictable consequences of this policy was enormous, and it was paid by people and companies who also pay taxes. Shooting yourself in the foot is bad enough, but having to pay for the gun and the bullets to do so should be beyond the pale.

*The International Conference on Cyber Conflict (CyCon) is organized by the NATO Cooperative Cyber Defence Centre of Excellence (CCDCoE) in Estonia. The proceedings are typically published by IEEE but are also available from the CCDCoE.

This is a major revision of an article first published on zcobb.com in May 2017.

Stephen Cobb

Independent researcher into risk, tech, gender, ethics, healthcare, and public policy. A life spent learning, writing, and teaching. Based in Coventry, England.