
Digital Weapons: Governments, Stockpiles, and International Trading

Governments tend to produce their own cyberweapons in-house, tailoring them to the specific systems they want to infiltrate. However, they often buy their component code from third parties. Specifically, governments are major buyers of ‘exploits’ – code that targets a vulnerability in a system to give hackers unauthorized access or control within it. An exploit is the delivery system for malicious code, much as a rocket is the delivery system for a warhead. Exploits are valuable, and they’re bought and sold in ‘grey’ markets of ambiguous legality and pervasive secrecy. The notable thing about an exploit is that, once defenders become aware of it, they can patch the vulnerability it targets, rendering the exploit useless. When governments purchase exploits, then, they face a difficult ethical quandary: do they disclose them or not? Revealing an exploit, and working with firms to neutralize it, increases citizens’ overall cybersecurity, but deprives the government of a potentially valuable weapon. Keeping it secret does the opposite, increasing the government’s offensive capabilities while leaving citizens vulnerable.
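To make that patching dynamic concrete, below is a minimal, textbook-style sketch in C – purely illustrative, and not drawn from any incident discussed here – of the kind of flaw an exploit targets: a classic stack buffer overflow, alongside the bounds-checked fix that ‘kills’ any exploit built on it.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: a textbook stack buffer overflow, the kind of
       vulnerability an exploit targets. Not code from any real incident. */

    /* Vulnerable: copies input into a fixed-size buffer with no bounds
       check. Input longer than 15 characters overwrites adjacent stack
       memory, which a crafted exploit can abuse to hijack control flow. */
    void parse_name_vulnerable(const char *input) {
        char buffer[16];
        strcpy(buffer, input);             /* no length check: the vulnerability */
        printf("hello, %s\n", buffer);
    }

    /* Patched: the same function with bounds checking. Once this fix is
       deployed widely enough, an exploit for the overflow is worthless. */
    void parse_name_patched(const char *input) {
        char buffer[16];
        strncpy(buffer, input, sizeof(buffer) - 1);
        buffer[sizeof(buffer) - 1] = '\0'; /* guarantee null termination */
        printf("hello, %s\n", buffer);
    }

    int main(void) {
        parse_name_vulnerable("Ada");      /* safe only because input is short */
        parse_name_patched("Ada");
        return 0;
    }

An exploit for the vulnerable version would feed it an input crafted to overwrite the stack; against the patched version, the same input is harmlessly truncated. That is precisely why disclosure and patching destroy an exploit’s value.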

It’s not a theoretical problem. In 2014 the Heartbleed vulnerability left hundreds of thousands of servers exposed and allowed hackers to steal social insurance numbers from the Canada Revenue Agency. It was alleged that the National Security Agency (‘NSA’ – one of America’s intelligence agencies) had known about the vulnerability for two years and, by failing to disclose it, bore partial responsibility for exposing Canadians to the risk of fraud. Though the NSA denied prior knowledge of Heartbleed, the accusations seemed sufficiently plausible for the Obama administration to announce that, going forward, it would have a ‘bias’ towards disclosing new vulnerabilities. What that bias amounts to in practice remains a mystery to most commentators, and it points to a more general problem. A lack of transparency around government cyberwar programs, while necessary to their success, prevents citizens from evaluating the risks their governments expose them to, creating gaps in democratic accountability. This differs from traditional warfare, where offensive capabilities have little direct bearing on domestic security. Producing a new fighter jet doesn’t make citizens more vulnerable to being bombed, for example.

While governments do make rhetorical appeals to the importance of defending domestic cybersecurity, there are simply too many incentives to maintain robust stockpiles of exploits. A recent RAND report, based on previously secret data, indicates that exploits have an average lifespan of approximately seven years. An exploit’s lifespan runs from when it is developed to when it is effectively ‘killed’ – that is, when the vulnerability it targets has been patched in enough systems to render the exploit functionally worthless. What happens between birth and death isn’t known in detail. An exploit can be bought and used right away, yet be designed well enough that it remains undetected for years. Alternatively, it can sit on a shelf for years and then, once used, be detected relatively quickly. A seven-year lifespan is surprisingly long. While it doesn’t mean that the average exploit is used for seven years, it does imply that exploits tend to sit unused for substantial stretches of time – meaning that government stockpiles expose citizens to risk measured in years, not months. Though cyber policymakers are still sifting through the ethics of this system, and the public is slowly becoming more aware of its trade-offs, we’ve yet to see a comprehensive regulatory or legislative response.

Beyond domestic stockpiling, there’s also the question of how exploits are traded internationally. While exploits can be weaponized, they’re ultimately just pieces of code, and so they fit poorly into pre-existing arms control frameworks. In the absence of clear regulations on how they can be sold, the grey market has stepped in with some level of self-regulation. Private firms that buy or develop exploits offer assurances that their customers include only states approved by NATO, the EU, ASEAN, and so on. These assurances are often met with justifiable skepticism. Firstly, a lack of formal oversight makes them difficult to verify. Secondly, firms have historically had no issue selling to countries that, while not necessarily branded as enemy states, are nonetheless morally questionable customers. In one high-profile case, ‘Hacking Team’, an Italian firm, sold surveillance tools to Ethiopia that were later used to infiltrate, monitor, and disrupt Ethiopian journalists operating in the United States and Europe.

Some headway is being made toward regulating international exploit trading, though solutions have largely been haphazard and relatively uncoordinated. Regulators scored a major victory through an amendment to the Wassenaar Arrangement, an arms control pact which, at the time of writing, has 42 participating states – mostly EU members, along with other significant players such as the United States, Canada, India, and South Korea. In 2013, Wassenaar categorized exploits as dual-use technology, formally recognizing that they have both civilian and military applications, and consequently placing them under strict export controls. While this is a major achievement in regulating exploit trading, one that shows states increasingly protecting citizens from the risks of digital weapons, it isn’t without limitations and hiccups. Wassenaar is only a voluntary framework and lacks any mechanism to enforce its policies. It also covers only an elite class of countries. This may limit the flow of exploits from rich democracies to poor autocracies but, mirroring traditional arms control, it does little to stop delinquent nations from disseminating exploits to one another. Lastly, the Wassenaar amendment illustrates a more general problem in cyber-legislation: a lack of clarity on terminology. The initial language defined exploits too expansively and risked criminalizing researchers who use technologies that, while similar to exploits, aren’t weaponizable. Cyber-related firms pushed to fix this, but it took five years to rectify the language.

Featured Image: Computer restart screen. Via Pexels.com

Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Adam Zivo
Adam Zivo is a social entrepreneur, photographer, and content producer. His past clients include brands such as America's Next Top Model, Flixel, and Bell Media. He is the founder and director of LoveisLoveisLove, an LGBTQ+ arts campaign that has engaged 400,000+ people to date. Adam completed his Bachelor of Arts in Philosophy at the University of Toronto, and in 2018 will be commencing his Master of Public Policy and Governance at The Munk School, University of Toronto. Adam maintains a broad knowledge base, but is particularly focused on cyber and information warfare, the political use of social media, as well as larger intersections of technology and governance.
http://natoassociation.ca/about-us/adam-zivo/