A breach. A data loss incident. An insider leak. A media report of client data loss. Any of these would probably bring on a mild panic attack for most CISOs. Eventually, depending on the size of the organisation, that data breach will end up in the public eye, either via official acknowledgement that a breach has occurred – as required by, say, the UK Information Commissioner's Office – or a simple media response to explain that 'everything is under control'. Ultimately that public information could damage the brand and future customer base of the organisation. Depending on the industry and the type of product or service being offered, the damage could be irreparable.
The sources of data breaches and losses are many and varied, with new attack vectors appearing all the time. If we were to quickly categorise data breaches, we would probably come up with a list something like this:
- Malicious cyber attack
- Malware within the corporate network
- Negligent employee (laptop loss, USB loss)
- Malicious insider
- Careless insider (erroneous data copying, emailing of confidential data)
- Mis-configured software and hardware
Whilst that is only a high-level view, it covers a multitude of data loss scenarios for many organisations. In response to a known threat, there are several process and technology countermeasures an organisation could implement to reduce or ultimately remove the threat.
- Malicious cyber attack > Firewall, Intrusion Detection System, Intrusion Prevention Systems
- Malware within the corporate network > Anti-virus, SIEM logging, abnormal event monitoring
- Negligent employee (laptop loss, USB loss) > Data Loss Prevention, data & asset management
- Malicious insider > Event monitoring, access monitoring
- Careless insider (erroneous data copying, emailing of confidential data) > DLP, event monitoring
- Mis-configured software and hardware > Baselining, penetration testing, auditing
Each countermeasure would be applied using a standard risk framework to identify vulnerabilities and the threats that could exploit them. In turn, countermeasure selection would follow a structured approach, so that the investment delivers a worthwhile return measured against the loss expectancy before and after the countermeasure is in place – less, of course, the cost of the countermeasure itself.
This basically follows the standard formula: Annualized Loss Expectancy (ALE) = Single Loss Expectancy (SLE) × Annualized Rate of Occurrence (ARO). A countermeasure would be selected when its annual cost is lower than the reduction in ALE it delivers. This assumes that both the ARO and the SLE are accurate, which is often not the case for either.
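The arithmetic above can be sketched in a few lines of code. This is a minimal illustration using entirely hypothetical figures (a laptop-loss scenario with invented SLE, ARO and countermeasure cost values), not data from any real assessment:

```python
def ale(sle, aro):
    """Annualized Loss Expectancy = Single Loss Expectancy x Annualized Rate of Occurrence."""
    return sle * aro

def countermeasure_value(ale_before, ale_after, annual_cost):
    """Net annual value of a countermeasure: the ALE reduction it delivers,
    less its own annual cost. Positive means it pays for itself on paper."""
    return ale_before - ale_after - annual_cost

# Hypothetical laptop-loss scenario (all figures are assumptions):
sle = 50_000          # cost of a single incident
aro_before = 0.5      # expected incidents per year with no controls
aro_after = 0.1       # expected incidents per year with DLP/encryption in place
control_cost = 8_000  # annual cost of the countermeasure

before = ale(sle, aro_before)  # 25,000
after = ale(sle, aro_after)    # 5,000
print(countermeasure_value(before, after, control_cost))  # 12,000 net benefit
```

On these invented numbers the control is justified; the whole calculation, of course, is only as good as the ARO and SLE estimates fed into it.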
So, if in a given year the ARO turned out to be zero, would the perceived return on investment of the countermeasure be higher or lower? If you have never been attacked or had a vulnerability exploited, it can be difficult to quantify the true effect of the existing countermeasures. On one hand, it could be argued that the countermeasures are worth far more than their actual implementation cost, as the assets they are protecting have never been exposed. On the other hand, it is potentially the case that an asset is worth far more once lost than while secure, so a single loss could render all the protection measures meaningless.
I think in practice, if an organisation has identified a failing in a process, product or scenario that has resulted in a data breach or loss, it becomes politically justifiable to implement further countermeasures above and beyond the ALE, due to the intangible effects of such a loss. Similarly, if the ARO was zero, could a reduction in countermeasure spending be justified for the following year?
Of course, there are probability analytics that could be applied to produce a result mathematically, but the costs of brand damage, reputational harm and future trade loss are often difficult to quantify, which can result in a 'belt and braces' approach from a post-breach organisation.