Threat Intelligence Automation: Moving Beyond IOC Collection

We must advance threat intelligence practice beyond simply collecting indicators if we are to remain effective against today's sophisticated adversaries. In my work on cybersecurity automation and threat intelligence, I have seen firsthand how traditional approaches are too inefficient and too slow to keep pace with modern attack methodologies.
Collecting and distributing Indicators of Compromise (IOCs) has been the focus of traditional threat intelligence. IOCs are specific artifacts left behind by malicious activity, such as IP addresses, domain names, file hashes, and other static traces of an actor setting up infrastructure or spreading malware. While these indicators are valuable and can help an organization prepare for or respond to an incident, they do not tell the whole story.
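At its core, IOC-based detection reduces to a lookup against a feed of known-bad values. The sketch below (with made-up indicator values from the documentation IP ranges) shows how simple, and how brittle, that matching is: change one byte of the binary or rotate the IP, and the match fails.

```python
# Minimal sketch of traditional IOC matching (hypothetical feed values).
# Static lookups are fast but brittle: any change to the indicator
# (new IP, recompiled binary) evades the match entirely.

KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}  # documentation-range IPs
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
}

def matches_ioc(event: dict) -> bool:
    """Return True if any field of the event matches a known indicator."""
    return (event.get("dst_ip") in KNOWN_BAD_IPS
            or event.get("file_sha256") in KNOWN_BAD_HASHES)

event = {"dst_ip": "203.0.113.7", "file_sha256": None}
print(matches_ioc(event))  # True: destination IP is on the blocklist
```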
Today’s threat actors are much more sophisticated and aware of how static indicators can be used to identify them. In response, they have adapted and now use a number of techniques to obfuscate their activities, including:
- Rotating infrastructure rapidly and across diverse hosting providers, making it difficult to track
- Using domain-generation algorithms to produce large numbers of domains that they switch in and out of for command-and-control purposes
- Living off the land (abusing legitimate built-in tools, often called LOLBins) so the victim's own software does the dirty work for them

As a result, the traditional IOCs we once relied on to detect threats are becoming less and less effective.
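To see why domain blocklists lag behind, consider a toy domain-generation algorithm (DGA). The seed and naming scheme below are invented for illustration; the point is that client and C2 server can derive the same fresh domains from a shared seed and the date, so yesterday's blocklist is already stale.

```python
import hashlib
from datetime import date

def dga_domains(seed: str, day: date, count: int = 5) -> list:
    """Toy DGA: derive pseudo-random domains from a shared seed and the
    current date, so malware and C2 server can rendezvous without any
    hard-coded address for defenders to blocklist."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{seed}-{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:12] + ".example")
    return domains

# A fresh set of domains every day from the same seed
print(dga_domains("toy-seed", date(2024, 1, 1)))
```

Defenders facing a real DGA must reverse the algorithm and predict domains in advance, which is exactly the shift from enumerating indicators to modeling behavior.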
The constraints of static indicators have pushed the field toward detection models based on behavior. These models work on a different principle: instead of describing what a past attack looked like and hoping the next one reuses the same artifacts, they analyze how an attack unfolds across systems and networks.
This change requires security teams to develop a much richer understanding of adversary tactics, techniques, and procedures (TTPs). By focusing on behaviors rather than static indicators, security teams can maintain detection capabilities even as attackers change their infrastructure or tools. The key insight here is that while indicators change frequently, the fundamental behaviors and objectives of attackers remain much more consistent over time.
To detect threats using behavior-based methods, security teams must first determine what constitutes normal behavior for their networks, systems, and users. This isn't easy: it requires teams to understand their environments deeply, then use that knowledge to identify anomalies that might indicate the presence of a threat actor. Correlating those anomalies, and any events associated with them, into a coherent story about what's happening takes a more capable engine than most firewalls have.
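A minimal sketch of the baseline-then-flag idea, using an invented per-user metric (logons per hour) and a crude z-score test. Real baselining is far richer, but the shape is the same: learn normal, then score deviation.

```python
import statistics

def flag_anomaly(baseline: list, current: float, threshold: float = 3.0) -> bool:
    """Flag the current observation if it deviates from the baseline by
    more than `threshold` standard deviations (a simple z-score test)."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline) or 1e-9  # avoid division by zero
    return abs(current - mean) / stdev > threshold

# Hypothetical baseline: a user's interactive logons per hour last week
logons_per_hour = [4, 5, 6, 5, 4, 5, 6, 5]
print(flag_anomaly(logons_per_hour, 40))  # True: 40 logons is far outside normal
print(flag_anomaly(logons_per_hour, 6))   # False: within the normal band
```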
Today, artificial intelligence and machine learning are basic components of the modern threat intelligence workflow. In the past, analysts processed intelligence largely by hand, and the mind-numbing volume of patterns and the staggering number of pattern variations made much of the data effectively incomprehensible to humans.
Unsupervised anomaly detection is particularly valuable: machine learning models can establish baselines of normal behavior and flag deviations that might indicate compromise, without needing prior examples of specific attack patterns. This is a key capability for detecting novel threats for which no signatures exist.
Natural language processing techniques can automatically pull valuable intelligence from unstructured sources such as security blogs, research papers, and threat reports. This substantially reduces the manual effort otherwise required to maintain current awareness, freeing intelligence analysts to spend their time on actual analysis rather than collection.
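Even without full NLP, much of this extraction can be sketched with pattern matching. The fragment below is a crude, regex-level stand-in for the idea: scan free-form report text and pull out candidate indicators. (The report text and patterns are illustrative; production extractors also handle defanged indicators like `hxxp` and `[.]`.)

```python
import re

IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
SHA256_RE = re.compile(r"\b[a-fA-F0-9]{64}\b")
DOMAIN_RE = re.compile(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b")

def extract_iocs(text: str) -> dict:
    """Pull candidate indicators out of free-form report text."""
    return {
        "ips": sorted(set(IP_RE.findall(text))),
        "hashes": sorted(set(SHA256_RE.findall(text))),
        # drop dotted strings that are actually IP addresses
        "domains": sorted(set(d for d in DOMAIN_RE.findall(text)
                              if not IP_RE.fullmatch(d))),
    }

report = ("The actor staged payloads on 198.51.100.23 and used "
          "update-check.example for C2.")
print(extract_iocs(report))
```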
The application of predictive analytics to threat intelligence is perhaps the most promising. By looking at historical attack data, predictive models can identify which organizations, sectors, or systems are most likely to be targeted next. This allows for the kind of proactive defense that should be the norm rather than the kind of reactive patching that tends to happen after an organization has been breached.
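As a deliberately naive sketch of the predictive idea: rank sectors by their share of historical incidents. The incident records are invented, and a real model would add recency weighting, seasonality, and actor-intent features, but the input and output have this general shape.

```python
from collections import Counter

def sector_risk_scores(incidents: list) -> dict:
    """Naive predictive scoring: rank sectors by their share of
    historical incidents, highest first."""
    counts = Counter(i["sector"] for i in incidents)
    total = sum(counts.values())
    return {sector: n / total for sector, n in counts.most_common()}

# Hypothetical incident history
history = [{"sector": "healthcare"}, {"sector": "finance"},
           {"sector": "healthcare"}, {"sector": "energy"}]
print(sector_risk_scores(history))  # healthcare carries the highest score
```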
Separating valuable signal from background noise is one of the most significant challenges in threat intelligence. Security Operations Center (SOC) teams are bombarded with alerts, and the ability to pick out the one or two signals that actually matter is the difference between a functional SOC and a dysfunctional one. Even if reading a single alert and weighing its plausible scenarios takes only seconds, those seconds multiply quickly when ten or twenty varieties of signal arrive at once.
Basic indicators of compromise only become actionable intelligence when they are enriched with specific context about their relevance to the organization. Without that context, security teams waste precious time on false positives that look significant and on low-priority threats that pose little real risk. This brings us back to how we do, or fail to do, threat prioritization.
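One simple way to turn context into prioritization is to score each indicator from its enrichment fields and sort. The field names and weights below are illustrative, not a standard schema.

```python
def prioritize(iocs: list) -> list:
    """Score each indicator from its context fields and return the list
    sorted highest-priority first. Field names and weights are
    illustrative, not a standard schema."""
    weights = {"active_campaign": 3, "targets_our_sector": 2, "seen_internally": 4}
    for ioc in iocs:
        ioc["priority"] = sum(w for field, w in weights.items() if ioc.get(field))
    return sorted(iocs, key=lambda i: i["priority"], reverse=True)

feed = [
    {"value": "203.0.113.7", "active_campaign": True, "seen_internally": True},
    {"value": "stale.example", "active_campaign": False},
]
print(prioritize(feed)[0]["value"])  # the contextually relevant IOC ranks first
```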
Correlating information across many feeds and internal telemetry is what automated systems do best: they relate and connect separate data points and surface what those parts signify as a whole. This is why the modern security operations center needs automation, and why analysts need those systems to work well. With sound internal logic, these systems can cover many high-value targets while sending far fewer false alarms to the analyst on duty.
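A minimal sketch of that correlation step: group alerts from different sources by a shared entity (here, a source IP) and escalate only entities seen in more than one feed, so the analyst receives one enriched case instead of several disconnected alerts. Feed names and alert text are invented.

```python
from collections import defaultdict

def correlate(events: list) -> dict:
    """Group alerts from different feeds by a shared entity (source IP)
    and keep only entities corroborated by more than one feed."""
    cases = defaultdict(list)
    for ev in events:
        cases[ev["src_ip"]].append(f'{ev["feed"]}: {ev["alert"]}')
    return {ip: alerts for ip, alerts in cases.items() if len(alerts) > 1}

events = [
    {"src_ip": "198.51.100.23", "feed": "edr",   "alert": "lolbin execution"},
    {"src_ip": "198.51.100.23", "feed": "proxy", "alert": "dga-like domain"},
    {"src_ip": "192.0.2.10",    "feed": "ids",   "alert": "port scan"},
]
print(correlate(events))  # only the IP seen in two feeds is escalated
```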
Looking ahead, a number of developing trends will shape the evolution of automated threat intelligence. We will probably see an even deeper integration of threat intelligence directly into security controls, enabling real-time defensive adjustments without human intervention. This integration will cut response times from hours or days to mere seconds and limit the impact of many attacks.
In the future, all but the simplest platforms will support secure, intuitive sharing of threat intelligence across different kinds of organizations and sectors. Once the privacy and competitive-sensitivity concerns that inhibit sharing today are resolved, automated exchange can approach the richness of the manual sharing that happens between trusted peers now, extending today's largely internal collaboration into a much larger model spanning many different organizations.
We are starting to see attackers use AI to evade detection. Security teams that could once count on a perimeter defense and a set of well-defined rules to detect and block intruders now need to revisit their technology stack and incorporate adversarial machine learning techniques if they are to maintain an effective defense.
The evolution of threat intelligence beyond simple IOC collection is a necessary response to ever-more-sophisticated threat actors. Instead of relying solely on indicators of compromise to describe threat activity, vendors and internal teams now provide context around signals: why a signal or set of signals might indicate malicious activity, what kind of threat actor might be behind it, and what that actor might actually be trying to accomplish.
Some teams have gone further and embraced behavior-based detection outright. Many modern products are quite good at identifying patterns, and security operations teams have made a science of improving their signal-to-noise ratio, picking out important patterns in the noise far more consistently than before.
Companies that manage to put these advanced methods of threat intelligence into action will be far better positioned to spot and counter new threats before they can do much damage. As security professionals, we must constantly evolve our playbooks to stay one step ahead of the determined adversaries we’re up against in this never-ending technological arms race.