Why Information Security Metrics Are Important

"He uses statistics as a drunken man uses lampposts - for support rather than for illumination" ~ Andrew Lang

Metrics and statistics, whilst subtly different, are often seen as the accountant's yardstick and the pragmatist's whipping stick.  The use of metrics in IT has had a long and perhaps uneasy history.  Technicians want to design, implement and fix.  Managers and budget owners need to show value, deliver service and ultimately keep the customer, production line or CFO happy.  An efficient and sustainable business position is a meeting place between the two, where tangible (and intangible) metrics (not statistics) matter to both parties.

Why Use Metrics?

IT security has often been seen as a cost within IT, which until very recently was itself seen as a cost to the business.  IT was a necessary component, granted, but organisations have historically not seen it as a strategic part of the overall business delivery cycle: never capable of driving efficiencies, saving money or being proactive in winning and keeping customers.  That view has changed considerably, and information security is now becoming the necessary component within IT.

But what is the driver for security?  The main ones are probably compliance, brand damage (especially if customer records are lost) and the clean-up costs of breaches.  So the CEO wants the company to be secure.  The infosec team wants the company to be secure.  So what's the problem?

There are two main ones.  Firstly, the non-infosec community within IT will often not have security as their default modus operandi.  That's not to say they are security-averse, just not pro-security by default.  This can hamper design, policy and implementation.  Secondly, how do the ideas and strategies from the CxO level filter down to the infosec implementers?  One side is talking budgets and ROI; the other is talking standards, compliance, APTs, firewalls and DLP.

The use of some sort of metric-driven analysis can not only aid implementation, but also help non-technical members of the business understand the reason, rationale and benefit that a secure infrastructure can provide.  As a metric is a snapshot in time, it can also provide a useful benchmark for gauging the performance and success of a particular project, policy or component.  This can help not only individuals, but also budget realignment and project funding.

What to Measure?

The key to defining what to measure is being able to define a framework that can show progress and performance across all components of the infosec life cycle, whilst being of benefit to the board, IT and infosec teams.  To break this down further, it's important to understand what infosec posture the organisation is taking: which security policies have been created, and how are they being implemented?  Which systems, devices and data are being monitored, controlled or impacted by those policies?  In addition, it's important to understand the type and structure of the metrics being used.

Metrics don't always have to be numeric and tangible in structure.  They can also be more subjective and intangible, covering things like brand awareness, confidence levels and so on.  For example, what is the damage to a large online retailer if it loses 100k customer credit card details?  The impact on brand and future custom could be quite difficult to measure tangibly, but that's not to say it can't be measured in some way.

The most obvious low-level areas to cover would be things like anti-virus coverage: a basic percentage showing the number of devices, those protected by AV software and, for example, the percentage with virus definitions older than 3 days.  Others could include the average patch latency.  This could be measured for particular servers, desktops or devices, showing the lag between a vendor releasing a security update and the time taken to roll that update out.  More subtle measures could include the number of password resets a help desk receives, which could indicate that a password policy is too complex for users to remember their own passwords.  A password strength checking metric could also be used to see how successful a password education policy has been.
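The patch latency idea above is easy to sketch: for each host, take the gap between a vendor's release date and the date the update was actually deployed, then average it.  The records and dates below are made up purely for illustration:

```python
from datetime import date

# Hypothetical patch records: vendor release date vs. the date the
# update was actually rolled out to each server.
patches = [
    {"host": "srv01", "released": date(2024, 4, 9), "deployed": date(2024, 4, 16)},
    {"host": "srv02", "released": date(2024, 4, 9), "deployed": date(2024, 4, 23)},
    {"host": "srv03", "released": date(2024, 4, 9), "deployed": date(2024, 4, 12)},
]

# Lag in days for each host, then the mean across the estate.
lags = [(p["deployed"] - p["released"]).days for p in patches]
avg_latency = sum(lags) / len(lags)

print(f"Average patch latency: {avg_latency:.1f} days")
```

Tracked over time, a falling average is a simple, defensible signal that the patching process is improving.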

The catalogue of metrics should include both technical and non-technical aspects.  The underlying aim would be to show the general performance of the security infrastructure of the organisation.  Security isn't just about firewalls and access control lists.  It is about education, personnel and physical attributes too.

How to Measure?

The initial measurement should be recorded periodically and then used against other business and project data to show efficiency, or at least an attempt at a return on investment.  For example, with the basic antivirus approach mentioned earlier, the following could be a good starting point.  First, perform an asset inventory of devices that could carry, or become the victim of, a virus or malware attack.  Information such as the device/service owner, the business impact if unavailable and perhaps previous downtime statistics would be useful too.  Next, apply the coverage metric: identify which devices have some sort of antivirus protection installed.  Now isn't the time to question the whys, why-nots and versions; just make a note.  Next could be a more detailed metric analysing whether the antivirus definitions are within a certain threshold.  That threshold should really come from the underlying security posture and policy surrounding antivirus protection; the metrics will ultimately help to shape that policy in the long term.
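The inventory-then-coverage-then-threshold steps can be sketched as below.  The field names, devices and 3-day threshold are all assumptions for illustration; in practice the threshold would come from the organisation's AV policy:

```python
from datetime import date, timedelta

# Hypothetical asset inventory, including the owner and business-impact
# fields mentioned above.  All records are invented for the example.
inventory = [
    {"name": "web01",  "owner": "web team",   "impact": "high",
     "av_installed": True,  "defs_updated": date(2024, 5, 9)},
    {"name": "mail01", "owner": "msg team",   "impact": "high",
     "av_installed": False, "defs_updated": None},
    {"name": "kiosk1", "owner": "facilities", "impact": "low",
     "av_installed": True,  "defs_updated": date(2024, 5, 3)},
]

today = date(2024, 5, 10)
threshold = timedelta(days=3)  # assumed; should come from the AV policy

# Step 2: coverage - which devices have any AV installed at all.
covered = [d for d in inventory if d["av_installed"]]

# Step 3: of the covered devices, which have stale definitions.
stale = [d for d in covered if today - d["defs_updated"] > threshold]

coverage_pct = 100 * len(covered) / len(inventory)
stale_pct = 100 * len(stale) / len(covered)

print(f"AV coverage: {coverage_pct:.0f}% of devices")
print(f"Stale definitions: {stale_pct:.0f}% of covered devices")
```

Keeping the owner and impact fields alongside the raw flags means the same data set can later be sliced for the business-facing report, not just the technical one.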

There's quite a lot of information already in that small metric.  It will undoubtedly require some sort of automation and will probably require assistance from system and network administrators.  This can often be a sensitive issue.  Scripting or extracting the version and coverage data may require a bit of non-BAU work to be carried out by a team which may not initially see the benefit of gathering the data.  A discussion around the benefits to the general IT team of being able to measure this type of data is imperative here.  Focus on showing that it will ultimately draw positive attention to what was perhaps a mundane 'behind the scenes' job and assist with funding, upgrades, overtime and so on, even if in the short term the results may not seem positive.

Reporting the Results

Clarity and simplicity need not be the same.  As the audience will undoubtedly be more business than technically focused, the data clearly needs to be presented in business language.  This is not to say that the basic, omnipresent traffic light system should be used all the time; that seems appropriate only for the most basic data types.  Whilst the basic percentages in the previous example are useful for technicians, CxOs and budget owners will want to know what they mean from a down-to-earth, real-life impact perspective.  So what if 4 servers are 14 days behind in the antivirus roll-out plan and two test mail relay appliances are not covered at all?  What does that mean to the customer, or for the service the business delivers?
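For the cases where a traffic light view is appropriate, it is just a threshold mapping over a percentage metric.  The green/amber cut-offs below are assumptions for illustration; a real deployment would take them from the security policy:

```python
def rag_status(metric_pct: float, green: float = 95.0, amber: float = 80.0) -> str:
    """Map a percentage metric to a red/amber/green status.

    The 95/80 thresholds are illustrative assumptions, not a standard;
    they should be derived from the organisation's security policy.
    """
    if metric_pct >= green:
        return "green"
    if metric_pct >= amber:
        return "amber"
    return "red"

print(rag_status(98.0))  # a near-complete AV roll-out
print(rag_status(67.0))  # a large coverage gap
```

The point of the surrounding paragraph stands, though: the colour alone tells a budget owner nothing about impact, so it should accompany the narrative, not replace it.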

Here, real impact data should be used.  Monetary data is often useful, but isn't always the easiest to obtain.  For example, if a mail device is not protected, or is only partially protected using out-of-date definitions, the likelihood of an outbreak will increase.  The cost to recover from an outbreak could be $100k, split across consultancy, out-of-hours overtime and a percentage of unhappy customers who received spam from the malware that was 'released'.  It's the impact that budget or service owners are interested in, and that must always be the underlying theme of how the results are reported: the impact on budget and/or customer happiness, and the delivery of the key components that affect those two factors.
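One simple way to turn a coverage gap into a monetary figure is an expected-loss calculation.  Every number below is an assumption invented for the example; the value of the exercise is forcing those assumptions into the open where the business can challenge them:

```python
# Illustrative only: all figures below are assumptions, not real data.
unprotected_devices = 2             # e.g. two uncovered mail relay appliances
annual_outbreak_likelihood = 0.10   # assumed chance per unprotected device per year
recovery_cost = 100_000             # assumed: consultancy, overtime, customer goodwill

expected_annual_loss = (
    unprotected_devices * annual_outbreak_likelihood * recovery_cost
)
print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
```

A figure like this lets the budget owner weigh the cost of extending AV coverage against the cost of doing nothing, which is exactly the impact framing the reporting should aim for.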

Ideally the business should have enough information from reading the report that they themselves can make an informed decision as to whether a particular security posture is being upheld or not.

The reporting process should be periodic, as opposed to an annual audit-style approach.  This will give a more regular, ingrained approach to security.  Ultimately, a metric-driven approach is only a means to an end.  The end is to help embed security in the overall business and technical aspects of the organisation, where appropriate.  This proactive stance will ultimately be more cost- and effort-efficient if a secure posture is required.

A metric-driven approach will help to refine budgets and identify weaknesses, of course, but it should also help show that information security is a proactive and contributory discipline, with benefits to the entire business life cycle, as opposed to being a component of reactionary IT, used only when something bad has happened.


