Shifting Gears from IOCs to IOBs
The after-the-fact nature of IOCs is one of their clearest limitations. They are documentation artifacts (the hash of a file, the reputation of an IP, known-bad URLs, an in-memory footprint, etc.) based on an isolated action after it has occurred. Too often, their 1:1 mapping, in which each IOC triggers an alert that a Security Operations Center analyst must then triage or act on, leads to alert overload. Even though advanced SIEMs, UEBAs, and threat intelligence platforms can automate away some false positives, false positives still occur at excessively high rates.
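To make the 1:1 mapping concrete, here is a minimal sketch of that classic IOC-to-alert model. The indicator values and function names are illustrative, not drawn from any real feed or product: every match raises its own context-free alert, which is exactly the pattern that produces alert overload.

```python
# Hypothetical known-bad indicators, keyed by IOC type. Values are made up.
KNOWN_BAD = {
    "hash": {"deadbeefdeadbeefdeadbeefdeadbeef"},
    "ip": {"203.0.113.66"},
    "url": {"http://malicious.example/payload"},
}

def match_iocs(event):
    """Return one alert per matching IOC: no context, no prioritization."""
    alerts = []
    for ioc_type, values in KNOWN_BAD.items():
        if event.get(ioc_type) in values:
            alerts.append(f"ALERT: known-bad {ioc_type}: {event[ioc_type]}")
    return alerts
```

Each event is scored in isolation, so a thousand matching events mean a thousand alerts for an analyst to triage.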
Limitations of IOCs
Another key limitation: IOCs were designed for an infrastructure-centric view of security, and the world has been changing for years. The current pandemic accelerated this change as organizations now struggle to secure hybrid IT environments: the corporate “network” is now made up of thousands of “branch offices of one” as employees work from home. That is why we believe users, not the network, are the new perimeter, and that data gravity has changed the information protection game. In this reality, IOCs simply fall short.
Beyond the sheer volume, the bigger challenge is that IOCs are derived from actions that occur in isolation, lacking context. As standalone events, IOCs are difficult to prioritize and even harder to keep current. And even assuming security teams can handle those challenges, what is the life span of an IOC? How and when does an IOC expire? How much “noise” is there in threat intelligence feeds?
Forcepoint’s Goals with IOBs
An IOB is the way a user, device, or account conducts itself. Our teams designed dozens of IOBs with the clear goal of addressing IOCs’ shortcomings. For IOBs, both the context and the timeline (the “kill chain” equivalent) are key. IOBs focus on understanding, in a much broader way, the context around how your employees interact with the organization’s data and systems over time. Context here means understanding a user’s typical behavior, the timeframe, the applications used, the actions they are taking, and the outcome they are trying to achieve.
Risk Scores are Key
Our risk computation engine is key to making IOBs effective. Each IOB defines a base risk contribution along with a decay over time, and depending on further context, that contribution can adapt. All of this is in service of a key outcome: true risk-adaptive protection for users. IOBs enable a shift from a reactive posture to a proactive one. IOBs and the dynamic risk scores they power allow security leaders to anticipate malicious activities like data exfiltration, compromised user credentials, or other insider threats. Most importantly, they help security teams stay left of breach.
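The idea of a base risk contribution with decay and context adaptation can be sketched in a few lines. This is a toy model under assumed semantics (a half-life decay curve, a simple context multiplier, and a score that sums contributions), not Forcepoint’s actual engine; every name and number below is illustrative.

```python
from dataclasses import dataclass

@dataclass
class BehaviorEvent:
    """One observed IOB instance (illustrative model, not a product API)."""
    name: str
    base_risk: float            # base risk contribution of this IOB
    half_life_hours: float      # how quickly the contribution decays
    context_multiplier: float = 1.0  # adapts the contribution to context

def decayed_risk(event: BehaviorEvent, hours_since: float) -> float:
    """Contribution of one event, halving every half_life_hours."""
    decay = 0.5 ** (hours_since / event.half_life_hours)
    return event.base_risk * event.context_multiplier * decay

def user_risk_score(events_with_age) -> float:
    """Dynamic score: sum of all decayed contributions for a user."""
    return sum(decayed_risk(e, age) for e, age in events_with_age)
```

Under this model, a risky event observed a day ago weighs less than the same event observed an hour ago, so the score naturally recedes unless risky behavior continues.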
Controlling and monitoring application and data access is only one part of it. IOBs also factor in actions in the context of each other to produce an overall risk score. Typical employee behaviors like accessing approved applications and data shares won’t adversely impact a user’s risk score. But risky behaviors, like taking a screenshot of confidential documents shared in a Zoom session and saving it to a USB key or a cloud storage service, or printing those same critical documents at home, will negatively impact a person’s score.
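The “actions in the context of each other” point can be illustrated with a small scoring sketch. The weights, action names, and combination bonus below are invented for illustration; the point is only that benign actions contribute nothing while certain combinations score higher than the sum of their parts.

```python
# Hypothetical per-action weights; approved activity carries no risk.
RISK_WEIGHTS = {
    "open_approved_app": 0.0,
    "access_data_share": 0.0,
    "screenshot_confidential": 15.0,
    "copy_to_usb": 20.0,
    "print_at_home": 25.0,
}

# Combinations that are riskier together than apart (illustrative).
COMBO_BONUS = {
    frozenset({"screenshot_confidential", "copy_to_usb"}): 30.0,
}

def score_session(actions):
    """Sum per-action risk, then add a bonus for risky combinations."""
    score = sum(RISK_WEIGHTS.get(a, 0.0) for a in actions)
    seen = set(actions)
    for combo, bonus in COMBO_BONUS.items():
        if combo <= seen:
            score += bonus
    return score
```

A session of only approved activity scores zero, while screenshotting a confidential document and copying it to USB in the same session scores more than either action alone would suggest.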
Take a look at my GovWare 2020 slides for a deeper look into IOBs, our design goals, how we categorize them, and how they will be a key component in an organization’s cybersecurity path forward.
This post was first published on the Forcepoint website by Nicolas Fischbach. You can view it by clicking here.