Gemalto: An Uneasy Sense of Déjà Vu

Blog post by Clavister

Following the recent revelation that the NSA and Britain's GCHQ hacked into leading digital security company Gemalto, potentially stealing millions of SIM card encryption keys, the IT security and comms industry could be forgiven for a strong sense of déjà vu. Less than two years since the initial Snowden leaks, the news should perhaps come as no surprise, but once again businesses globally were left asking exactly who was targeting their data.

While ‘lawful interception’, a well-documented, clearly traceable process with a legal basis that offers no surprises, has always been accepted, the uncovering of widespread state-sponsored hacking and surveillance has drawn strong international condemnation. In the cold light of day, it is only a blurring of moral lines away from the illicit activity of the cybercriminals that businesses have relentlessly grappled with for the past two decades.

Following the initial Snowden leaks, government officials rushed to assure companies and the public that PRISM, the NSA’s industrial-scale surveillance programme covering data and voice traffic, wasn’t being used on them, and that safeguards were in place to ensure their data and records were not compromised. But with the Gemalto hack and reports that the CIA is desperate to break Apple’s encryption, businesses can be forgiven for being unconvinced that they too aren’t being targeted.

Community Spirit Opens Backdoors

Of course, it isn’t just state-backed attacks posing problems for IT security departments; backdoors in networking equipment such as security gateways and firewalls must also be considered. While very different in nature, the Heartbleed and Shellshock vulnerabilities highlighted how even the most robust security solutions can be undermined by a weakness in code. Indeed, if they taught us anything, it is that open-source code isn’t tested anywhere near enough.

Both exploited simple coding errors – the type of error any developer could make – and the main issue was not the error itself, but the assumptions made by thousands of people globally about the integrity and security of open-source code. Those errors and assumptions turned a simple bug into a global headline and sparked a desperate scramble to close off the vulnerability.

Facing the Unknown

Whether it be state-backed attacks or erroneous open-source code embedded in security solutions, the big challenge businesses face is knowing exactly who is targeting them and for what purpose. As Gemalto, which had to conduct a massive internal investigation, proved, it is almost impossible to know if you are being targeted by surveillance organizations. And of course there is no global Internet security task force that actively seeks out and closes off coding vulnerabilities as they are discovered.

What is clear is that it is increasingly difficult for organizations to know who, and what solutions, they can trust. Any number of organizations within the supply chain could at any given time be required to provide information to a state government, potentially handing over the keys to your data. At the same time, a business could be one probing cybercriminal away from discovering that the very system it relies on to protect its network is compromised by a simple coding error.

So what can organizations do to protect their businesses against the risks of state intelligence gathering and of untested code being deployed in security solutions?

Firstly, they must re-evaluate how they work with other businesses. Since PRISM, it has been safe to assume that the intelligence agencies of the superpowers have the ability to monitor businesses and individuals to gather information, seemingly unchecked, using in-depth knowledge of networking and security solutions and software. Armed with this knowledge, organizations must ask awkward questions about who they interact with and how. Can equipment and software originating from countries involved in such information gathering really be trusted completely and relied on for corporate security? Is it really wise to offer any sort of access to your corporate policies and information to companies whose governments could demand that access, or indeed seize it without due legal process, at any given time?

Secondly, businesses need to ensure that all solutions are rigorously developed, tested and re-tested so that any vulnerabilities are removed. IT must be about trust, founded on a solid technical basis. And that applies to consumers, major websites and IT vendors alike. Hackers are acutely aware that organizations rely on vast amounts of untested code in websites, apps, security solutions and more, offering them a plethora of opportunities to exploit. If organizations want to continue to realize the benefits of open source, then open-source code must be rigorously tested to reduce potential vulnerabilities before it is deployed and assumed to be secure.

While in the past businesses may have been afforded a margin of error that fostered trust, it is evident that that margin is shrinking rapidly. From PRISM to Gemalto and Heartbleed to Shellshock, organizations put their faith in the transparency of government and the robustness of open-source code without verifying that their trust was justified or deserved. And if organizations want to avoid déjà vu all over again, unearned trust is a luxury they simply can no longer afford.