Using Tactical and Strategic Analysis to Track Threat Actor Targeting

by thesilence | 2025-08-08


Introduction

As intelligence analysts, we are responsible for providing timely and accurate reporting to our stakeholders. Our assessments support everything from technical hunting and detection to the reliable evaluation of organizational risk.

In Part 1 of this three-part series, we noted the critical link between the tactical data we collect and the strategic assertions we make based on that tactical evidence. We described our increasing reliance on higher-level assertions and the problems that result when these assessments get out of sync with our evidence. We examined how tactical evidence and strategic assertions are represented (and linked to each other) in Synapse, and used Storm to navigate between the two and ensure they remain consistent.

In Parts 2 and 3, we continue our survey of tactical and strategic data by looking at two additional threat-related assessments: targeting and motivation. We'll examine targeting (and untangle it from motivation!) here in Part 2, and address motivation in Part 3.


Background

Previously, we looked at the relationships between a threat cluster and the software, techniques, or vulnerabilities the cluster uses. There are two challenges to tracking these aspects of a cluster's behavior:

  • When we try to make use of third-party reporting, most reports do not include enough information (tactical evidence) to allow us to validate the reporter's strategic assertions.

  • Even for our own primary reporting, the evolving nature of threat clusters makes it easy for our tactical evidence and strategic assertions to get out of sync unless we monitor them carefully.

Despite these challenges, we can (with the appropriate evidence) objectively verify assertions about things like whether a threat cluster uses a specific vulnerability. Ideally our strategic assertion is based on tactical evidence (malware samples, log entries, forensic data) that is not in dispute.

Targeting and motivation introduce a new problem. Motivation is the purpose behind a threat's actions. Targeting also alludes to purpose, suggesting that a threat deliberately used a particular set of criteria to identify desirable victims. Both terms imply that we understand the reasons behind a threat cluster's actions - the why.

Threat actors rarely tell us their specific goals. This means that statements we make about why a cluster does something are assessments. These assessments should still be based on observation - we can't make claims about motivation in a vacuum. But the statements we make about motivation and targeting can rarely be traced back to "ground truth" in the same way as "Sparkling Unicorn uses CVE-2017-0199". This complicates both our own reporting on targeting and motivation as well as our attempts to cross-reference activity that we see with activity reported by third parties.


The Problem with Targeting

Threat profiles often include information about the kinds of organizations attacked by a particular threat. This information is commonly presented as the countries and industries targeted by the threat.

As we noted, targeting implies the deliberate selection of a victim based on criteria that align with the attacker's goals. Technically speaking, every attack is targeted (purposefully executed) for some definition of "targeted". But these specifics can vary widely:

  • A threat actor interested in an advanced materials manufacturing process may attack (target) the specific company that patented the process.

  • A threat actor interested in taking advantage of the latest 0-day vulnerability before it is patched may launch an indiscriminate attack to exploit (target) any Internet-facing vulnerable system.

In the examples above, both attacks were deliberately chosen and executed. Both were targeted (based on the threat actor's particular goals), though the attacks differ dramatically in scope and specificity. As external observers, we see the attack and its effects. The attackers' goals are generally not clear from our outside perspective. Most importantly, the goals may be entirely unrelated (or simply incidental) to the victims' location or industry.

Let's look at this issue from another perspective. Through our research, we determine that the threat cluster Sparkling Unicorn compromised a pharmaceutical company located in the Netherlands. These facts may have nothing to do with why the company was compromised:

  • Maybe the company is one small part of a larger supply chain Sparkling Unicorn is interested in.

  • Maybe Sparkling Unicorn wants to use the company's network to attack a trusted partner.

  • Maybe Sparkling Unicorn thinks the company is likely to pay a ransom and can be easily extorted.

  • Maybe the company just happened to be vulnerable to a "spray and pray" style phishing campaign or automated network-based exploit attempt.

It may be misleading for us to say that Sparkling Unicorn targets the Netherlands or targets the pharmaceutical industry - this implies that simply being located in the Netherlands (or being a pharmaceutical company) is sufficient reason to put you in Sparkling Unicorn's crosshairs.

Targeting and Industries

"Targeted" industry names in particular often serve as a coded proxy for threat actor motivation. For example, the statement "Angsty Rutabaga targets think tanks" may actually be an oversimplification of the assessment "Angsty Rutabaga is interested in thought leadership on economic policy in South America". To obtain that information, Angsty Rutabaga may target a political science professor (education), a consulting firm specializing in overseas investment (professional services and/or financial services), or a ministry of economic development (government). The motivation for the activity gets oversimplified into Angsty Rutabaga targeting the pseudo-industry "think tanks", instead of the more accurate assessment that their goal is to obtain certain policy data, and that the related victims reside in a cross-section of economic sectors (industries).

One effect of mixing victimology with motivation is that we end up with a semi-arbitrary and inconsistent set of industry names used across intelligence reporting. The following are all industry names that appeared in public reports from various reporters:

  • defense

  • defense and aerospace

  • defense contractors

  • defense funding

  • defense industrial base

  • defense systems and equipment

  • military

  • military-related organizations

Some reporters combine industries ("defense and aerospace") while other reporters track them as separate industries. And as we saw with "think tanks", some reported industries are not industries at all (e.g., "non-profits", "dissidents"). Needless to say, these inconsistencies make it difficult to compare and contrast threat activity reported by different organizations using different industry names.

Understanding the industries or locales (countries or regions) affected by threat activity can be a useful metric, providing a sort of "heat map" of current activity. But when our reporting conflates victimology with threat actor goals, we cloud the threat landscape rather than clarifying it for our stakeholders.

The Solution: Reframe Our Perspective

The good news is that when we shift our thinking around what we mean by targeting, we can cleanly separate victimology information from attacker motivation. We can easily identify and report objectively on the locations or industries of known victims. By making this portion of our reporting objective, we can cross-reference and validate our tactical data (individual victims) and our strategic assertions (generalized victimology or targeting) within Synapse.

Tip

When we distinguish victimology from threat actor motivation, we can categorize victim organizations by industry based on the goods or services they produce instead of the particular reasons we think they were victimized. We can avoid picking arbitrary industry names and (ideally) adopt a common set of industries based on an existing economic standard (such as ISIC, NACE, or NAICS).

Not only does this provide the intelligence profession with consistent categories and terminology for industries, it also allows us to have more meaningful discussions around the true economic cost of malicious activity through the lens of existing and widely recognized frameworks.
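As a brief sketch (the industry name below follows an ISIC top-level section title, and the Storm is purely illustrative), adopting a standard taxonomy can be as simple as creating ou:industry nodes whose names come from the standard rather than from ad hoc reporting:

```
// Create an ou:industry node named for an ISIC top-level section
// rather than an ad hoc label such as 'defense funding'
[ ou:industry=* :name='financial and insurance activities' ]
```

Victim organizations (ou:org nodes) can then reference this consistent set of industries via their :industries property.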


How it Works in Synapse

Threat activity in Synapse is represented by various activity nodes used to summarize key information about an event such as an attack, a compromise, or an extortion attempt. Most nodes used to model threat activity reside in the risk:* portion of the data model, with the exception of campaigns (ou:campaign). We associate activity with a particular threat cluster by applying the cluster's tag (its risk:threat:tag value) to the activity node:

_images/00_targeting.webp

Tip

For threat clusters that have matured into threat groups (and have an associated risk:threat:org), some activity nodes may also have a related property set (e.g., risk:compromise:attacker). This is less common (it can take years for a threat cluster to graduate into a threat group). Regardless of whether the risk:threat is a cluster or a group with an associated :org, the cluster's tag (risk:threat:tag) should be present on the activity node to associate (attribute) the activity to the cluster.

The country or industry affected by the activity comes from information about the victim. Where the victim of a given type of activity is an entity, we represent the victim as a set of contact information (ps:contact) in a property on the activity node (e.g., risk:compromise:target). For activity that can impact a wider range of resources (e.g., an attack can target a server, or an outage can affect a railway line), the affected object is linked to the activity node via a light edge (e.g., risk:attack -(targets)> *).
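As a rough sketch (all values are invented), a compromise with an organizational victim, and an attack linked to a specific server, might be modeled as:

```
// Model a compromise whose victim (an entity) is captured as a set of
// contact information in the :target property
[ risk:compromise=*
    :target={[ ps:contact=* :orgname='woot pharmaceutical' ]} ]

// Drop the compromise from the pipeline so the next edit block
// applies only to the new attack node
| spin |

// Model an attack and link the affected server via a -(targets)> light edge
[ risk:attack=* +(targets)> { [ inet:server=tcp://203.0.113.10:443 ] } ]
```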

To identify the industries or countries affected by a particular threat cluster, we can navigate from the cluster's tagged activity nodes to information about the victim of that activity. To show how this works in practice, we'll revisit our notional threat cluster "Sparkling Unicorn".


Practical Examples

Threat Clusters and Victimology

Just as we saw in Part 1, our tactical evidence consists of tagged nodes. In this case, our tagged nodes are activity nodes. The activity must be linked to the target/victim, and the victim must have an associated industry (if we're tracking activity by industry) or country (if we're tracking activity by country).

_images/01_targeting.webp

At the strategic level, we represent the high-level assertions about victimology with a -(targets)> light edge:

_images/02_targeting.webp

Tip

If we are reporting activity by region - such as Asia-Pacific - we can create a geo:place node for the region that -(contains)> the geo:place nodes for the countries we include in that region (e.g., pol:country -> geo:place <(contains)- geo:place | uniq).
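As a sketch (the place and country names are illustrative), the region node and its -(contains)> edges could be created like this:

```
// Create a geo:place node for the Asia-Pacific region and link it to
// a country we include in that region via a -(contains)> light edge
[ geo:place=* :name='asia-pacific'
    +(contains)> { pol:country:name=japan -> geo:place } ]
```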

We can also view these strategic links using the Vertex Threat Intel Workflow:

_images/03_targeting.webp

Tip

Our ability to reliably report on threat activity depends on our data in Synapse being consistent and complete to the best of our ability. Our efforts will never be perfect, but if there are gaps in what we capture (e.g., a compromise that wasn't modeled, or a victim whose industry wasn't noted), this can skew our analysis.

There are also cases where similar information in Synapse can be modeled in slightly different ways. For example, an organization's country could be represented by the ou:org:loc property, or its :country:code, or its :country. We may need to agree on what information to capture and how to capture it (for consistency), or modify our Storm queries to account for some variations in how the data is modeled (or both).


Software and Victimology

Some organizations find it useful to identify the software targeting particular industries or locales (countries or regions). Software does not have intent, so it doesn't "target" anything. That said, knowing what software is being observed within various industries or countries may be useful as a snapshot of current activity.

In Synapse, we don't directly model software (risk:tool:software) affecting sectors (e.g., with -(targets)> light edges). Instead, we link software to the activity (attack, compromise, etc.) where that software was used via a -(uses)> light edge:

_images/04_targeting.webp

We can use Storm to trace the software used in various kinds of activity, through the activity nodes, to the associated victim organizations, to the sectors (industries or countries) where the software was observed. For example, the following Storm query identifies the countries where REDTREE software was observed, based on known attacks (risk:attack) and compromises (risk:compromise):

risk:tool:software:soft:name=redtree +:reporter:name=vertex |
tee { <(uses)- risk:attack -(targets)> ou:org } { <(uses)- risk:compromise :target -> ps:contact -> ou:org:hq } |
uniq | :country:code -> pol:country:iso2 | uniq

In this case, our results show that REDTREE software was used against one or more victims located in Japan:

_images/05_targeting.webp

Using Storm to Query our Data

As we saw in Part 1 of this series, because our tactical evidence and strategic assessments are linked within Synapse, we can use Storm to navigate between the two and validate their consistency. To illustrate this, we'll use the example of the industries affected by the activity of our threat cluster, Sparkling Unicorn.

Synapse's data model includes a range of activity nodes. To get an accurate picture of a threat cluster's actions, we need to make sure our Storm query accounts for all of the relevant nodes (that is, all the kinds of activity we have captured and modeled).

It's also important to note that multiple activity nodes can represent related activity against the same victim. For example, a successful attack (risk:attack) can lead to a compromise (risk:compromise); a successful compromise may be followed by an extortion attempt (risk:extortion). We'll need to de-duplicate our victims once we identify all of the activity associated with a given threat.

In the examples below, we account for attacks, compromises, extortion attempts, leaks, and outages. We omit campaigns (ou:campaign) as (by definition) a campaign is a broader set of activity that would commonly consist of multiple attacks and/or compromises. You can of course modify these queries to fit your needs.

Tip

Remember - Storm can be stored for easy retrieval and execution on demand! Node Actions, triggers and cron jobs, and macros can all be used to save and easily run useful Storm queries and commands.
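As a brief example (the macro name is our own invention), a frequently used lift can be saved with macro.set and re-run on demand with macro.exec:

```
// Save a query that lifts Sparkling Unicorn's tagged activity nodes
macro.set unicorn.activity ${
    risk:threat:org:name='sparkling unicorn' +:reporter:name=vertex :tag -> syn:tag
    -> (risk:attack, risk:compromise, risk:extortion, risk:leak, risk:outage)
}

// Run the saved macro
macro.exec unicorn.activity
```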


Tactical to Strategic

We can use our tagged activity nodes to identify the industries of the victims targeted by Sparkling Unicorn:

risk:threat:org:name='sparkling unicorn' +:reporter:name=vertex :tag -> syn:tag
-> (risk:attack, risk:compromise, risk:extortion, risk:leak, risk:outage) |
tee { +risk:attack -(targets)> ou:org } { +(risk:compromise or risk:extortion)  :target -> ps:contact -> ou:org:hq }
{ +risk:leak :owner -> ps:contact -> ou:org:hq } { +risk:outage :provider -> ou:org } | uniq |
:industries -> ou:industry | uniq

The same query with comments:

// Lift the risk:threat node for 'sparkling unicorn' reported by Vertex
risk:threat:org:name='sparkling unicorn' +:reporter:name=vertex

// Pivot from the threat cluster's tag to the associated syn:tag node
:tag -> syn:tag

// Pivot from the syn:tag node to the activity nodes we're interested in that have the threat cluster tag
-> (risk:attack, risk:compromise, risk:extortion, risk:leak, risk:outage) |

// Pivot to the organizations associated with each type of activity
tee { +risk:attack -(targets)> ou:org }
{ +(risk:compromise or risk:extortion)  :target -> ps:contact -> ou:org:hq }
{ +risk:leak :owner -> ps:contact -> ou:org:hq }
{ +risk:outage :provider -> ou:org } |

// De-duplicate the results
uniq |

// Pivot from the organizations' :industries to the associated ou:industry nodes
:industries -> ou:industry

// De-duplicate the results
| uniq

Using our evidence nodes (tagged activity nodes), this query gives us the set of unique industries (ou:industry nodes) associated with Sparkling Unicorn victims:

_images/06_targeting.webp

We used the United Nations' International Standard Industrial Classification of All Economic Activities (ISIC) to assign a primary industry to each victim. For these examples, we classified victims based on ISIC's 22 top-level industries (which makes it relatively easy to assign industries to victims). However, if we want to track victims' economic activity in more detail, we can use ISIC's 800+ industries to assign highly specific categories to each victim. Because ISIC is a hierarchical system, these detailed categories can still be rolled up into their 22 top-level parents for more generalized reporting.

Tip

An organization may be part of more than one industry; this is especially true of large corporations and conglomerates. You may wish to decide (based on your needs and internal processes) whether organizations should be assigned a single primary industry, or how to address victimology reporting in cases where a victim is part of multiple industries.

Note: In the query above, we de-duplicated the resulting ou:industry nodes with the uniq command to display the set of unique industries targeted by Sparkling Unicorn. We can remove the uniq command if we want to see a statistical breakdown of the industries (i.e., which industries are impacted most frequently). The image below shows that Sparkling Unicorn most often victimizes organizations in the financial and insurance sector:

_images/06_targeting_2.webp

Strategic to Tactical

Similarly, we can use the industries that Sparkling Unicorn -(targets)> (based on our high-level profile) to navigate to the activity nodes that provide evidence of victims in those industries:

risk:threat:org:name='sparkling unicorn' +:reporter:name=vertex $tag = :tag -(targets)> ou:industry -> ou:org | uniq |
tee { <(targets)- risk:attack } { :hq -> ps:contact -> (risk:compromise:target, risk:extortion:target, risk:leak:owner) }
{ -> risk:outage:provider } | +#$tag

The same query with comments:

// Lift the risk:threat node for 'sparkling unicorn' reported by Vertex
risk:threat:org:name='sparkling unicorn' +:reporter:name=vertex

// Capture the :tag value as $tag
$tag = :tag

// Pivot to the industries the threat targets
-(targets)> ou:industry

// Pivot to the unique orgs that are part of those industries
-> ou:org | uniq |

// Use tee to pivot to activity nodes for those orgs
tee { <(targets)- risk:attack } { :hq -> ps:contact -> (risk:compromise:target, risk:extortion:target, risk:leak:owner) }
{ -> risk:outage:provider } |

// Filter to only those attacks / compromises tagged with our threat
+#$tag

The query returns the activity nodes associated with Sparkling Unicorn that have a victim organization within an industry that is linked to our threat via a -(targets)> edge:

_images/07_targeting.webp

Using Storm to Validate our Data

These queries illustrate pivoting from tactical to strategic data and vice versa, but they do not check that both aspects of our data are consistent with each other. That is, so far the queries navigate from one set of data to the other, but do not highlight any discrepancies. We need to modify our queries to do this. (Note that to fully validate our data, we need to run both queries below.)

Our first example above takes the tagged activity nodes for Sparkling Unicorn and returns all industries associated with victims of that activity. But the query does not verify that each of those industries also has a -(targets)> edge from our risk:threat. We can modify our query to check this (comments for relevant additions to the query):

risk:threat:org:name='sparkling unicorn' +:reporter:name=vertex

// Capture the risk:threat as $threat
$threat=$node.value()

:tag -> syn:tag
-> (risk:attack, risk:compromise, risk:extortion, risk:leak, risk:outage) |
tee { +risk:attack -(targets)> ou:org } { +(risk:compromise or risk:extortion)  :target -> ps:contact -> ou:org:hq }
{ +risk:leak :owner -> ps:contact -> ou:org:hq } { +risk:outage :provider -> ou:org } | uniq |
:industries -> ou:industry | uniq |

// Filter out industries already linked by a -(targets)> edge
-{ <(targets)- risk:threat=$threat }

This query will return any industries (ou:industry nodes) associated with Sparkling Unicorn victims that are not linked to our threat by a -(targets)> edge. We can add any missing edges (or review our activity nodes to see if something was tagged or modeled incorrectly). If no nodes are returned, then all of the victim industries are linked correctly!

Based on the results of our query, we have Sparkling Unicorn activity nodes associated with victims in three industries that are not linked to our risk:threat node with -(targets)> edges:

_images/08_targeting.webp

Our second example above takes the industries that Sparkling Unicorn -(targets)>, and returns the activity nodes with victims in those industries. But the query does not verify that each targeted industry has at least one associated activity node. We can modify our query to check this (comments for relevant additions to the query):

risk:threat:org:name='sparkling unicorn' +:reporter:name=vertex $tag = :tag -(targets)> ou:industry

// Make the latter part of our query into a subquery filter
// Filter out the ou:industry nodes that have an associated org that is a victim of our threat
-{ -> ou:org | uniq | tee { <(targets)- risk:attack }
{ :hq -> ps:contact -> (risk:compromise:target, risk:extortion:target, risk:leak:owner) }
{ -> risk:outage:provider } | +#$tag }

This query returns any industries (ou:industry nodes) that Sparkling Unicorn -(targets)> that do not have a corresponding activity node with a victim in that industry. We can now review our data to see if we need to add some missing evidence, or remove some industries that we've incorrectly listed as targeted. If no nodes are returned, then all of the industries that Sparkling Unicorn -(targets)> have at least one associated victim and are linked correctly!

Based on the results of our query, our strategic data says that Sparkling Unicorn -(targets)> victims in the manufacturing and health services industries, but we have no activity nodes with victims in these sectors:

_images/09_targeting.webp

Conclusion

As intelligence analysts, we're often asked to report on the industries or countries targeted by various threat actors. In practice, we often misuse the term, mixing victimology (objective information about victim organizations) with attacker motivation (our assessment of the particular reason behind an attack). Conflating the two can lead to a lack of clarity in our reporting. In particular, this has contributed to a confusing set of industry names used within the intelligence field. Reporting organizations may create and assign industry names to victims where the names are chosen semi-arbitrarily, and may even reflect (in whole or in part) why the reporter believes the victim was targeted (as opposed to what the victim organization actually does).

If we reframe our reporting of targeting as victimology, we can cleanly separate who was affected by threat activity from why they were affected. We can use Synapse to represent threat activity, link the activity to victims, and use Storm to both query the data and (importantly) ensure that any higher-level assertions we make about threat actor targeting are consistent with our evidence.

If you missed Part 1 of this series, be sure to read up on using Synapse and Storm to capture (and validate) the software, techniques, or vulnerabilities used by a threat actor. And stay tuned for Part 3, where we'll dive into threat actor motivation, and describe how we can use Synapse to build a more nuanced understanding of attacker goals and objectives.


To learn more about Synapse, join our Slack community, check out our videos on YouTube, and follow us on Bluesky or Twitter.