Trust within the identity world is a huge priority. It spans the on-boarding and registration of external users via proofing (think assurance levels built upon identity validation and verification techniques), right through to creating trust labels for employees in order to monitor for malicious activity – whether driven by external threat actors, insider threats or simply unintentional bad user behaviour.
The other side of the coin, of course, is the trust an identity places in a service provider – can they be trusted with my personally identifiable information?
But I want to briefly focus on the trust of the identity – and there are a couple of data points we can use for this. Firstly: is the user (and I’ll stick to the identity of people here, instead of non-person entities, things and services, for the time being) known to the service? By known, we can assume they have been enrolled into the system previously – either automagically or via self-registration. Either way, at some point some data was created and persistently stored, and could be used for a re-authentication and trust evaluation event.
The other data point I suggest we could use is visibility. Now this may seem quite an unusual attribute, as it can be a little hard to measure, but being able to “see” the identity and the transaction or event they are looking to perform is an important step in being able to respond – with the appropriate level of friction at the correct time. Security preparedness often fails not because controls are circumvented, but simply because there is no visibility of an event taking place – or of a subject-to-object relationship being created and maintained.
So what can we do with these two data points? Well, a basic two-by-two matrix is a good starting point, as we can start to classify some polarised behaviours as per the following:
| Identity -v- Visibility | Seen | Unseen |
| --- | --- | --- |
| Known | 1 – Monitor Behaviour aka trust but verify | 2 – Monitor Access Points aka egress and ingress flows |
| Unknown | 3 – Adaptive Response – apply appropriate friction | 4 – Black Swan! – risk assess |
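The matrix above can be sketched as a simple classification function. The quadrant numbers and response labels mirror the table; the function and parameter names below are purely illustrative:

```python
def classify(identity_known: bool, event_seen: bool) -> tuple[int, str]:
    """Map the two data points onto the four quadrants of the matrix."""
    if identity_known and event_seen:
        return 1, "monitor behaviour (trust but verify)"
    if identity_known and not event_seen:
        return 2, "monitor access points (egress and ingress flows)"
    if not identity_known and event_seen:
        return 3, "adaptive response (apply appropriate friction)"
    return 4, "black swan (risk assess)"

# Example: a known user performing an unseen action lands in quadrant 2.
print(classify(identity_known=True, event_seen=False))
```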
1 – Known and Seen
Let’s start with the simplest combination: An identity is known and the activity is “seen”. This is a typical interaction for either authentication or authorization services. The fact that the event is seen allows a security control to be triggered – in which case we can borrow some zero trust terminology here and assume we can “trust but verify” once some steps have been taken to fulfil the control requirement. Perhaps a login event, or a policy evaluation request.
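As a rough illustration of “trust but verify”, a known and seen identity can still be pushed through a policy evaluation before the request is fulfilled. The in-memory policy store and names below are hypothetical – a real deployment would call out to an external policy decision point:

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    action: str
    resource: str

# Illustrative policy store; assumed for this sketch only.
POLICIES = {("alice", "read", "reports"): True}

def evaluate(req: Request) -> bool:
    """Verify the request against policy, even though the user is known."""
    return POLICIES.get((req.user, req.action, req.resource), False)

print(evaluate(Request("alice", "read", "reports")))    # permitted by policy
print(evaluate(Request("alice", "delete", "reports")))  # denied: no matching policy
```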
2 – Known and Unseen
Let’s try another scenario: one where an identity is known but the event they’re trying to perform is not seen. By this we assume the object they’re trying to access, or the event they’re attempting to complete, is either not under the control of a reference monitor, or perhaps has not been labelled or assigned permissions. So what can we do here? Well, the assumption would be that some sort of “meta” monitoring could suffice. So whilst a particular user is known, some of the activities they’re performing cannot be seen – but monitoring the ingress and, more importantly, egress points of data flows or APIs can provide an ability to apply some controls. Think of this approach as being like the detectors often seen at shop doors – the shop cannot see every shopper-to-item relationship, but looks to capture a theft event at a meta level on the way out – aka the egress filter.
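The shop-door analogy can be sketched as a meta-level egress check: we cannot see every user-to-object relationship, so we flag sessions whose outbound volume looks anomalous. The threshold and session names are illustrative assumptions:

```python
EGRESS_THRESHOLD_MB = 500.0  # assumed per-session limit for this sketch

def egress_alert(outbound_mb_per_session: dict[str, float]) -> list[str]:
    """Return sessions whose outbound data volume exceeds the threshold."""
    return [
        session for session, mb in outbound_mb_per_session.items()
        if mb > EGRESS_THRESHOLD_MB
    ]

# We never saw which objects these sessions touched, only what left the door.
sessions = {"sess-1": 12.5, "sess-2": 812.0, "sess-3": 44.0}
print(egress_alert(sessions))  # ['sess-2']
```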
3 – Unknown and Seen
This is quite an obvious one. In this case, we have a scenario where an event is seen (a login, an authorization request, etc.) but the identity is unknown. So here we have a basic control to authenticate the user (the classic HTTP 401). At this stage we don’t know them, so in order to complete the authentication event, for example, they may need to enrol into the service to create a profile. This enrolment process will of course contribute to the level of static assurance associated with the identity – what identity data was provided, where it was provided from, and how that data was validated and verified, for example. The output of this adaptive response process will be a level of friction appropriate to the transaction being performed.
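A minimal sketch of this adaptive response: an unknown identity triggers the 401 challenge, while a known identity receives friction scaled to the transaction. The enrolled-user set and the friction tiers are assumptions for illustration, not a standard:

```python
KNOWN_USERS = {"alice", "bob"}  # hypothetical enrolled profiles

# Illustrative friction ladder, keyed by transaction risk.
FRICTION = {
    "low": "password",
    "medium": "password + otp",
    "high": "password + otp + identity proofing",
}

def respond(user: str, transaction_risk: str) -> str:
    """Return the challenge for this user and transaction."""
    if user not in KNOWN_USERS:
        return "401 Unauthorized: enrol or authenticate first"
    return FRICTION[transaction_risk]

print(respond("mallory", "low"))  # unknown identity: the classic 401 path
print(respond("alice", "high"))   # known identity, risky transaction: step up
```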
4 – Unknown and Unseen
So this combination is really the important part of the entire discussion on trust. A scenario where an event is not being seen and the identity is also not known to the service provider. A classic double-negative “I don’t know what nobody is doing”. I perhaps jokingly referred to this as a “black swan” event – one which occurs only infrequently but has a huge impact. An example scenario could be where an authentication or authorization event has not been verified or has been circumvented entirely and activity against the object is not being monitored. The consequence of this could be devastating – mainly as the impact may not be known for a period long after the event has taken place.
So what are the options? Firstly, acknowledge that this may well be a real scenario. Control coverage against all objects may not be complete, and authentication and authorization gates may not always be applicable. But alas, security resources are finite, so an assessment exercise needs to take place that can help either reduce the impact of this occurring or reduce the likelihood of it occurring in the first place. This could involve the standard acceptance, transfer or avoidance tactics.
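The assessment exercise can be sketched as a likelihood-times-impact score mapped onto the standard treatment tactics. The thresholds below are illustrative assumptions, not a prescribed methodology:

```python
def treat(likelihood: float, impact: float) -> str:
    """Score risk on a 0-1 scale and pick a treatment tactic (sketch only)."""
    score = likelihood * impact
    if score < 0.1:
        return "accept"
    if score < 0.4:
        return "transfer"  # e.g. shift the exposure via insurance or contract
    return "avoid"  # re-architect to remove the unseen/unknown path entirely

# A black swan is infrequent but severe: low likelihood, very high impact.
print(treat(0.05, 0.9))  # 0.045 -> accept
print(treat(0.50, 0.9))  # 0.450 -> avoid
```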
In summary, trust is a transient yet hugely important aspect of the digital identity life cycle. It needs to be assigned, updated, managed with context and be dynamic, all at the same time. It is no longer acceptable to assign trust to an identity or a transaction as a static and immutable attribute. Trust needs to be consumed by a range of different actors, systems and services in order to deliver personalised experiences, multi-device interactions and broad ecosystems of data collaboration. Being able to classify identities as either known or unknown is a useful and standard approach for responding with appropriate levels of friction and security controls. The concept of visibility is more novel, but may provide an interesting way of extending risk assessment methodologies to objects, transactions and events which may not currently be seen as important to business functions.
About The Author
Simon Moffatt is Founder & Analyst at The Cyber Hut. He is a published author with over 20 years experience within the cyber and identity and access management sectors. His most recent book, “Consumer Identity & Access Management: Design Fundamentals”, is available on Amazon. He has a Post Graduate Diploma in Information Security, is a Fellow of the Chartered Institute of Information Security and is a CISSP, CCSP, CEH and CISA. His 2022 research diary focuses upon “Next Generation Authorization Technology” and “Identity for The Hybrid Cloud”.