An indicator of compromise (IOC) can be defined as a piece of information that can be used to identify a potential compromise of an infrastructure: from a simple IP address to a set of tactics, techniques and procedures used by an attacker during a campaign. Although when we think about IOC the first things that come to mind are IP addresses or domain names, the concept is a wider one; based on their complexity and the granularity of the data they represent, we can distinguish three kinds of indicators:
- Atomic indicators: those that cannot be broken into smaller parts and still retain their meaning in the context of an intrusion, such as an IP address or a domain name.
- Computed indicators: those derived from data involved in an incident, such as a file hash.
- Behavioral indicators: those that, built on the previous ones, allow an analyst to represent the behavior of a threat actor: its tactics, techniques and procedures (TTP).
TTP are linked to operational intelligence, while atomic and computed indicators are linked to tactical intelligence; and it is in this last group, with its short lifetime, where most shared indicators live: more than half of the IOC shared in threat intelligence sharing platforms are as simple as hashes, IP addresses and DNS domains. The problem? The one we have been facing for many years, and that is perfectly shown in the “Pyramid of Pain”: it’s trivial for an attacker with basic capabilities to evade detection based on these kinds of indicators. In fact, the three most commonly shared indicators (hashes, IP addresses and domain names) are the easiest to evade, which limits their usefulness.
For an attacker, it’s trivial to modify a hash (from compilation to execution time); changing an IP address used as a C2 or exfiltration server is also easy, as is changing domain names, with minimal trouble. So a threat actor with basic capabilities can evade detection based on these kinds of indicators, as exposed in “The Pyramid of Pain”. Behavioral indicators, TTP, are harder for an actor to modify, so if we are able to detect this modus operandi, our success at detecting compromises will increase.
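How trivial hash evasion really is can be shown in a few lines: flipping a single byte of a sample (the payload below is made up) produces a completely different digest, so every hash-based computed indicator stops matching.

```python
import hashlib

# Hypothetical sample: any one-byte change (e.g. in padding or a string)
# invalidates every hash-based computed indicator of the file.
original = b"MZ\x90\x00...malicious payload..."
modified = bytearray(original)
modified[-1] ^= 0x01  # trivial one-byte patch

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(bytes(modified)).hexdigest()

print(h1)
print(h2)
print(h1 != h2)  # the two digests share nothing useful for detection
```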
So, if atomic and computed indicators are not the most useful ones, why are they the most used and shared? In my opinion, the answer is simple: they can be loaded automatically into security tools, so they provide immediate results. If we receive a tactical intelligence feed, for example a list of domain names, IP addresses and similar indicators, we can load it into our perimeter appliances to detect activities linked to those indicators. We can query our SIEM to look for suspicious historical activity against these domains or addresses, and we can schedule that query to be notified of such activities in near real time. In summary, these indicators don’t need any human operation to be actionable. On the other hand, if the shared information is linked to operational intelligence, to the actor’s TTP, these behaviors are usually described in a documentary way or, at least, in a way that is harder to automate and to turn into actionable intelligence than atomic and computed IOC. Even a standard such as STIX, which allows TTP specification through its objects, cannot be immediately turned into automated detection.
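The reason atomic indicators are so immediately actionable can be sketched in a few lines: a feed is just a set of values to match against, with no human in the loop. A minimal illustration (feed contents and log records are invented):

```python
# Hypothetical tactical feed: domains and IP addresses, loaded as-is.
feed = {"evil-c2.example", "203.0.113.66", "bad-domain.example"}

# Simplified records as a SIEM might store them (made-up data).
logs = [
    {"src": "10.0.0.5", "dst": "203.0.113.66", "event": "outbound_tcp"},
    {"src": "10.0.0.7", "dst": "93.184.216.34", "event": "outbound_tcp"},
    {"src": "10.0.0.5", "dst": "evil-c2.example", "event": "dns_query"},
]

# The whole "analysis" is a set-membership test: fully automatic.
alerts = [rec for rec in logs if rec["dst"] in feed]
print(len(alerts))  # 2 matching events
```

Scheduling this same matching against incoming events is what gives tactical feeds their near real-time value, and also why they are so attractive to share.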
So, what is the situation? Most shared intelligence is easily evaded by an attacker and has a short lifetime, so it is not the most useful one. On the other hand, operational intelligence, the most useful and with a longer lifetime, linked to behavioral indicators, is barely shared, perhaps because it requires manual treatment. If we want to “cause pain” to the attacker we must focus our IOC on their TTP, not on their low-level indicators.
To identify TTP it is in many cases mandatory to establish relationships between security events; these relationships are usually temporal ones, but they can also be linked to dependencies between activities, for example. To detect them automatically we need at least two elements. The first one is an acquisition and processing capability where actions are registered, not only those linked to alerts (misuses or anomalies) but also normal activities in the infrastructure. This capability is usually the SIEM, where all useful security information is centralized, from endpoint to network events.
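A temporal relationship of this kind can be sketched as a small correlation over stored events. The behavior below (an Office process spawning a shell that then connects outbound within a time window) is purely illustrative, as are the event records and field names:

```python
from datetime import datetime, timedelta

# Made-up endpoint events, in the spirit of what a SIEM centralizes.
events = [
    {"ts": datetime(2023, 1, 1, 10, 0, 0), "type": "process_create",
     "parent": "winword.exe", "proc": "powershell.exe"},
    {"ts": datetime(2023, 1, 1, 10, 0, 20), "type": "net_connect",
     "proc": "powershell.exe", "dst": "203.0.113.66"},
]

def correlate(events, window=timedelta(seconds=60)):
    """Pair each suspicious spawn with a later outbound connection
    from the same process, inside the time window."""
    hits = []
    spawns = [e for e in events if e["type"] == "process_create"
              and e["parent"] == "winword.exe"]
    conns = [e for e in events if e["type"] == "net_connect"]
    for spawn in spawns:
        for conn in conns:
            if (conn["proc"] == spawn["proc"]
                    and timedelta(0) <= conn["ts"] - spawn["ts"] <= window):
                hits.append((spawn, conn))
    return hits

print(len(correlate(events)))  # 1 behavioral match
```

Note that neither event is an alert by itself; it is the relationship between them that encodes the behavior.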
Once information is gathered and processed, we need an automatic analysis capability: something to query the SIEM and extract TTP by establishing relations. At this point, different providers use different approaches: Microsoft has defined KQL, the Kusto Query Language, and Elastic also provides its own hunting rules. But these approaches only work in the Microsoft or Elastic ecosystems, so they can’t be used “as is” outside them, which prevents an effective information sharing scheme. SIGMA rules try to provide a generic, vendor-independent approach, and may become a widely used standard in the short term; it is an excellent initiative that provides a common language for well-known SIEM platforms. But even SIGMA doesn’t natively allow the specification of some TTP known to be used by advanced threats, so it should be improved or complemented with post-processing capabilities to widen the scope of its detection capabilities.
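As a reference point, a SIGMA rule expresses a single behavioral pattern in vendor-neutral YAML that converters then translate into each SIEM’s query language. The rule below is an illustrative sketch of a well-known pattern (an Office application spawning PowerShell), not taken from any official ruleset:

```yaml
title: Office Application Spawning PowerShell
status: experimental
description: Illustrative example of a behavioral pattern expressed in SIGMA
logsource:
    category: process_creation
    product: windows
detection:
    selection:
        ParentImage|endswith: '\winword.exe'
        Image|endswith: '\powershell.exe'
    condition: selection
level: high
```

Note that the rule matches a single event; relationships between several events over time, which many TTP require, are exactly what falls outside this native scope.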
In summary, we share the easiest-to-use intelligence, but not the best one; to detect advanced threats’ activities we must share more valuable intelligence, and to achieve this we need that intelligence to be automatically processable in every suitable environment, as a standard. Until we achieve this goal, we will keep focusing on the less valuable indicators, those that can be easily evaded by threat actors. But those indicators are also valuable, and we must continue sharing them as we have been doing: they are not the best ones, but they still add value to our detection capabilities. We “only” have to expand our capabilities, not discard the current ones and implement something completely new.