The Drivers For Identity Intelligence

From the main view of Identity & Access Management 1.0 (I dislike the versioning, but I mean the traditional focus on internal enterprise account management, as opposed to the newer brand of directory-based federated identity management commonly called IAM 2.0), identities have been modeled within a few basic areas.

The 3 Levels of Compliance
'Compliance by Review' (access certification, or the checking of accounts and their associated permissions within target systems), 'Compliance by Control' (rules, decision points and other 'checking' actions that maintain a status quo of policy control) and 'Compliance by Design' (automatic association of entitlements via roles, based on the context of the user) probably cover most of the identity management technology available today.

I want to discuss some of the changes and uses of the first area, namely access review.  This periodic process is often used to verify that currently assigned, previously approved permissions are still fit for purpose and match either the business function and risk, or the audit and compliance requirements.  The two requirements are really the carrot and stick of permissions management.  From an operational perspective, automating the access review process has led to the numerous certification products on the market that allow for the centralized viewing of account data, neatly correlated to HR feeds, to produce business-friendly representations of what needs to be reviewed and by whom.
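To make that correlation step concrete, the sketch below shows how a raw account export might be matched against an HR feed to produce reviewer-friendly items - all system names, field names and data here are hypothetical illustrations, not any specific product's format:

```python
# Hypothetical HR feed, keyed by corporate user ID.
hr_feed = {
    "jsmith": {"name": "Jane Smith", "manager": "kjones", "team": "Finance"},
    "bdoe":   {"name": "Bob Doe",    "manager": "kjones", "team": "Finance"},
}

# Raw account export from a target system (invented data).
accounts = [
    {"account": "jsmith",    "system": "SAP", "permissions": ["AP_POST", "GL_VIEW"]},
    {"account": "bdoe",      "system": "SAP", "permissions": ["GL_VIEW"]},
    {"account": "svc_batch", "system": "SAP", "permissions": ["GL_VIEW"]},  # no HR match
]

def build_review_items(accounts, hr_feed):
    """Correlate accounts to HR records and route each item to its reviewer."""
    items, orphans = [], []
    for acct in accounts:
        person = hr_feed.get(acct["account"])
        if person is None:
            # No HR match: flag as an orphan account rather than certifying blindly.
            orphans.append(acct)
            continue
        items.append({
            "reviewer": person["manager"],
            "user": person["name"],
            "system": acct["system"],
            "permissions": acct["permissions"],
        })
    return items, orphans

items, orphans = build_review_items(accounts, hr_feed)
```

The orphan list is a useful by-product: accounts with no HR correlation (service accounts, leavers) are often the riskiest items in a review.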

The Failings of Access Review
The major failing of many access review campaigns is information overload, or a lack of context surrounding the information presented for review.  For example, asking a non-technical manager to approve complex RACF permissions or Active Directory group names will result in check-box compliance, as the manager will be unsure which permissions should be removed.  Glossary definitions and incremental-style certifications then start to reduce the burden and volume of information presented.  Whilst these are nice features, they're really just emphasizing the weakness in this area.

Use Your Intelligence
A commonly heard head teacher's rebuke is the 'use your brains' or 'use your intelligence' theme when it comes to managing easily distracted or unthinking pupils.  The intelligence is often present by default, but not naturally used.  The same can be said of access review.  To make the review process effective - and by effective I mean actually delivering business value, not just complying with a policy - we need to think more about the value of doing it.  Instead of focusing on every application, every account and every permission, let's apply some context, meaning and risk to each piece of data.  Do you really need to verify every application, or just the ones that contain highly sensitive financial or client data?  Do you really need to verify every user account, or just the ones associated with users in the team that processes that data?  Do you really need to certify every permission, or just the ones that are high risk, or that vary from the common baseline for that team or role?
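As a rough illustration of that risk-scoped filtering, the following sketch narrows a campaign to high-risk applications and permissions rather than certifying everything - the risk tiers and entitlement data are invented for the example, not a product feature:

```python
# Illustrative risk tiers (assumptions for the sketch).
HIGH_RISK_APPS = {"SAP", "SWIFT"}              # apps holding sensitive financial/client data
HIGH_RISK_PERMS = {"AP_POST", "WIRE_APPROVE"}  # permissions with high business impact

# Flattened user/app/permission entitlements (invented data).
entitlements = [
    {"user": "jsmith", "app": "SAP",   "perm": "AP_POST"},
    {"user": "jsmith", "app": "Wiki",  "perm": "EDIT"},
    {"user": "bdoe",   "app": "SWIFT", "perm": "WIRE_APPROVE"},
    {"user": "bdoe",   "app": "SAP",   "perm": "GL_VIEW"},
]

def in_scope(entitlement):
    """Only high-risk permissions in high-risk applications enter the campaign."""
    return (entitlement["app"] in HIGH_RISK_APPS
            and entitlement["perm"] in HIGH_RISK_PERMS)

review_scope = [e for e in entitlements if in_scope(e)]
```

Here the wiki edit right and the read-only ledger view never reach a reviewer's queue; only the two payment-impacting entitlements do.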

Manage Exceptions and Let Average Manage Itself
By focusing on the exceptions, you can instantly remove 80% of the workload, from both an automation and a business-activity perspective.  The exceptions are the items that don't map to the underlying pattern of a particular team, or that perhaps carry a higher impact or approval requirement.  By focusing in this way, you not only lessen the administrative burden, but help to distribute accountability into succinct divisions of labour, neatly partitioned and self-contained.  If 80% of user permissions in a particular team are identical, capture those permissions into a role, approve that single role, then focus attention on the exceptional entitlements.  Ownership of the role, its contents and its applicability can then be removed from the view of the line manager in a clean demarcation of accountability, resulting in a more streamlined access review process.
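A minimal sketch of that role-plus-exceptions approach might look like this: mine the permissions shared by most of a team into a role, then surface only the deviations for manual review. The team data and the 80% threshold are illustrative assumptions:

```python
# Hypothetical team, mapping user to their full permission set.
team_permissions = {
    "jsmith": {"GL_VIEW", "AP_POST", "REPORT_RUN"},
    "bdoe":   {"GL_VIEW", "AP_POST"},
    "tlee":   {"GL_VIEW", "AP_POST", "WIRE_APPROVE"},
}

def mine_role(team, threshold=0.8):
    """Permissions held by at least `threshold` of team members form the role."""
    counts = {}
    for perms in team.values():
        for perm in perms:
            counts[perm] = counts.get(perm, 0) + 1
    cutoff = threshold * len(team)
    return {perm for perm, n in counts.items() if n >= cutoff}

# The common baseline is approved once, as a role.
role = mine_role(team_permissions)

# Only the deviations from the baseline go to the line manager.
exceptions = {user: perms - role for user, perms in team_permissions.items()}
exceptions = {user: extra for user, extra in exceptions.items() if extra}
```

With this data, the shared ledger and posting permissions become the role, and the manager is asked about just two items: one user's reporting right and another's wire-approval right.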

Whenever I see a process being re-engineered with neat 'features' or add-ons, I think the time has come to re-evaluate what is actually happening in the entire process.  Improvements in anything are great, but sometimes they are just masking an underlying failure.