Sanctions screening gets the attention. The alert investigation gets the work.
A sanctions alert is not a result. It is a request for manual investigation. The alert tells the analyst that a potential match exists between a screened entity and a sanctions list entry. It does not tell them whether the match is genuine, what action to take, or how to document the decision in a form that will hold up under regulatory scrutiny.
That process, sanctions resolution, is where compliance quality is actually determined. It is also the part that has almost no dedicated tooling, almost no standardisation, and almost no visibility at the leadership level until something goes wrong.
What the Screening Tool Hands Over
Most alerts contain too little information to make a decision inside the tool. A typical alert includes the name of the screened entity, the name of the matched list entry, a similarity score or match indicator, and a reference to which sanctions list the entry appears on. Some tools provide slightly more. A few include the sanctions programme, the designation date, or a link to the source list. Most do not provide enough for an analyst to reach a conclusion without leaving the interface.
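The handover can be pictured as a record with only a handful of fields. A minimal sketch, with hypothetical field names; real screening tools vary in what they expose:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningAlert:
    """What a typical screening tool hands the analyst (illustrative fields)."""
    screened_name: str        # name of the screened entity
    list_entry_name: str      # name of the matched sanctions list entry
    similarity_score: float   # e.g. 0.0 - 1.0; semantics vary by vendor
    source_list: str          # e.g. "OFAC SDN", "EU Consolidated"
    # Fields only some tools provide:
    programme: Optional[str] = None         # sanctions programme of the designation
    designation_date: Optional[str] = None  # date of designation, when available
    source_url: Optional[str] = None        # link back to the source list entry

alert = ScreeningAlert("ACME Trading GmbH", "ACME Trade Ltd", 0.87, "OFAC SDN")
```

Everything below the required fields is optional in practice, which is exactly why the analyst usually has to leave the interface.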
This is by design: the screening tool's job ends at the alert. Everything that follows is human work.

What the Analyst Actually Does
The investigation workflow is remarkably consistent across organisations, industries, and screening tools. The steps are the same. The systems are different. The documentation standards vary. But the work is the same work, performed manually, every time.
Step 1: Evaluate the match
The analyst looks at the alert and makes an initial assessment. Is the similarity score high enough to warrant detailed investigation? Is the match obviously wrong (different entity type, different country, clearly different name) or does it require closer examination?
How efficiently this triage happens depends entirely on how much information the screening tool provides. If the tool explains why the match was triggered and what the similarity score means, the analyst can triage efficiently. If the tool presents a name pair and a percentage with no context, the analyst starts from zero every time.
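The triage step amounts to a simple rule: screen out obvious mismatches, pass everything else to detailed review. The thresholds and checks below are hypothetical placeholders, not recommendations:

```python
def triage(similarity_score: float,
           entity_type_matches: bool,
           country_matches: bool,
           score_floor: float = 0.85) -> str:
    """Initial assessment: discard obvious non-matches, escalate the rest.
    Thresholds are illustrative; real policies are set per organisation."""
    if similarity_score < score_floor:
        return "dismiss: score below review threshold"
    if not entity_type_matches:
        return "dismiss: different entity type"
    if not country_matches:
        return "dismiss: no geographic overlap"
    return "investigate"

# An obvious mismatch: similar name, but no geographic overlap at all.
print(triage(0.91, entity_type_matches=True, country_matches=False))
# → dismiss: no geographic overlap
```

The point of the sketch is what it lacks: without the entity-type and country inputs, which many tools do not supply, the rule cannot fire, and every alert falls through to "investigate".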
Step 2: Research the screened entity
The analyst needs to establish who the screened entity actually is. This means checking company websites, corporate registration records, and internal systems for data that helps confirm the entity's identity, location, ownership, and business activity.
In most organisations, this information is not assembled in one place. The analyst toggles between the screening tool, a web browser, the ERP system, and sometimes internal shared drives where previous decisions on similar entities might be recorded. There is no single interface that presents the relevant context.
Step 3: Research the sanctions list entry
The analyst examines the sanctioned entity. Which list are they on? Which sanctions programme designated them? What is the scope of the restriction? Does it apply to the type of transaction in question?
An OFAC designation may prohibit all dealings. An EU listing under a different programme may restrict only specific activities. The analyst has to understand these differences, navigate them under time pressure, and apply the correct interpretation to the case at hand. For experienced analysts, this is routine. For regional staff, procurement teams, or other non-specialists who handle screening in decentralised organisations, it is a knowledge gap that slows the process and introduces inconsistency.
Step 4: Compare and decide
The analyst weighs the evidence. Does the screened entity match the sanctioned entity, or are they different parties with similar names? The decision factors include name similarity, geographic overlap, entity type, available identifiers (registration numbers, dates of birth, passport numbers), and any contextual information gathered in the previous steps.
Most of the time, the answer is clear. The entities are in different countries, different industries, or different entity types entirely. These are false positives, and they represent the vast majority of all alerts.
Sometimes the answer is ambiguous. The names are close, the geography overlaps, and there is not enough public information to reach a confident conclusion. These cases require escalation, additional research, or a judgment call that depends on the analyst's experience and risk tolerance.
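Step 4 can be read as aggregating signals for and against the match, and recognising when the evidence is too thin to decide either way. A sketch of that three-way outcome, with made-up signal names and illustrative logic only:

```python
def decide(signals: dict) -> str:
    """Weigh match evidence. Each signal is True (supports the match),
    False (contradicts it), or None (could not be established).
    Thresholds are illustrative, not a real decision policy."""
    supporting = sum(1 for v in signals.values() if v is True)
    contradicting = sum(1 for v in signals.values() if v is False)

    if contradicting > 0 and supporting <= 1:
        return "false positive"        # clear contradiction, little support
    if supporting >= 3 and contradicting == 0:
        return "potential true match"  # escalate for blocking/reporting
    return "escalate: ambiguous"       # not enough evidence either way

case = {"name_similarity": True, "geography": True,
        "entity_type": True, "registration_number": None}
print(decide(case))  # → potential true match
```

The ambiguous middle band is where analyst experience and risk tolerance take over, which is exactly why two analysts can reach different conclusions on the same alert.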
Step 5: Document the decision
Every decision must be documented. The reasoning, the sources consulted, the evidence considered, and the conclusion reached must be recorded in a form that can be reviewed later, potentially years later, by an auditor or regulator.
This is the step that suffers most under time pressure. The analyst has already spent five, ten, twenty minutes investigating. Writing a thorough case note adds more time. The next alert is waiting. The documentation gets shorter, less detailed, less consistent.
In many organisations, case notes are written in free-text fields, saved as PDFs to shared drives, or recorded in spreadsheets. There is no structured format, no required fields, no consistent standard for what constitutes adequate documentation.
What This Looks Like at Scale
At eight minutes per alert and a fully loaded analyst cost of €45 per hour, each alert costs €6 in analyst time. A company with 1,000 alerts per month is spending €6,000 monthly on investigation alone. Most of those alerts are false positives.
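The arithmetic behind those figures, for anyone who wants to plug in their own numbers:

```python
minutes_per_alert = 8
hourly_cost_eur = 45       # fully loaded analyst cost
alerts_per_month = 1_000

cost_per_alert = minutes_per_alert / 60 * hourly_cost_eur
monthly_cost = cost_per_alert * alerts_per_month

print(f"€{cost_per_alert:.2f} per alert")  # → €6.00 per alert
print(f"€{monthly_cost:,.0f} per month")   # → €6,000 per month
```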
But the cost is not just financial.
Attention degrades
An analyst reviewing their fiftieth alert of the day does not bring the same focus as they brought to their first. The investigation steps are the same, but the rigour decreases as fatigue accumulates. This is not a character flaw. It is a predictable consequence of performing judgment-intensive work at volume without structural support.
Consistency disappears
Two analysts reviewing the same alert may reach different conclusions depending on their experience, their interpretation of internal policy, and the time available. Across regions, shifts, and subsidiaries, the same alert type can produce different outcomes, different documentation, and different audit trails.
Knowledge walks out the door
When an experienced analyst leaves, they take with them their pattern recognition, their familiarity with recurring entities, and their institutional memory of how edge cases were handled. There is no system that captures this. The next analyst starts from scratch.
Documentation falls behind
Under pressure, case notes get thinner. Sources are not recorded. Reasoning is summarised rather than explained. The decision may have been sound, but the evidence trail does not prove it. When a regulator asks to see the file, the compliance team reconstructs rather than retrieves, and that gap between record and reconstruction is exactly what auditors probe for.
These problems do not appear in a single alert. They compound across thousands of alerts, across months and years, until the screening program has a detection layer that works and a resolution layer that does not.
Why the Investigation Workflow Has Not Changed
The screening market has evolved significantly over the past two decades. Matching algorithms are more sophisticated. List coverage is broader. Integration options are more flexible. Detection has improved steadily and measurably.
The tools changed. The investigation did not.
An analyst in 2026 investigates an alert using the same basic process as an analyst in 2010. Open the alert. Leave the tool. Search the web. Check the list. Form a judgment. Write a note. Close the case. Move to the next one. This is not because the problem is unsolvable. It is because the screening market defined its scope as detection and never expanded it. Vendors compete on matching accuracy, list coverage, and false positive reduction. The investigation that follows the alert was never part of the product.
The analyst becomes the integration layer between systems that were never designed to work together. They assemble information from the screening tool, the ERP, the web, corporate registries, and internal records into a coherent analysis. They perform this integration manually, for every alert, every time.
What Would Have to Change
The investigation workflow does not need incremental improvement. It needs infrastructure. The analyst should not have to leave the tool to gather context. External data (corporate registries, ownership structures, news, web presence) should be assembled and presented alongside the alert before the analyst opens it.
The analyst should not have to interpret the match from scratch every time. A structured analysis explaining why the alert was triggered, what signals support the match, and what signals contradict it should already exist when the analyst begins their review.
The analyst should not have to write unstructured case notes under time pressure. The documentation should be generated from the investigation itself, capturing every source, every signal, and every decision point in a format that is audit-ready by default.
The analyst should not be reviewing alerts that the system can resolve with high confidence. False positives where the contradiction evidence is unambiguous (wrong entity type, conflicting identifiers, geographic impossibility) should be cleared automatically with a complete reasoning trail.
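The auto-clear idea can be sketched as a rule that fires only when contradiction evidence is unambiguous, and that emits its full reasoning trail alongside the verdict. Everything here, field names included, is a hypothetical illustration of the principle:

```python
def auto_clear(alert: dict) -> tuple:
    """Clear an alert automatically only on unambiguous contradictions,
    returning the reasoning trail for the audit record. Illustrative only."""
    trail = []
    et, let = alert.get("entity_type"), alert.get("list_entity_type")
    if et and let and et != let:
        trail.append(f"entity type mismatch: {et} vs {let}")
    rn, lrn = alert.get("registration_number"), alert.get("list_registration_number")
    if rn and lrn and rn != lrn:
        trail.append(f"conflicting registration numbers: {rn} vs {lrn}")
    c, lc = alert.get("country"), alert.get("list_country")
    if c and lc and c != lc:
        trail.append(f"geographic mismatch: {c} vs {lc}")

    cleared = len(trail) >= 2  # require multiple independent contradictions
    if cleared:
        trail.append("auto-cleared as false positive; full trail retained")
    return cleared, trail

cleared, trail = auto_clear({
    "entity_type": "individual", "list_entity_type": "company",
    "country": "DE", "list_country": "IR",
})
```

Two design points carry the weight: the rule stays silent when data is missing rather than guessing, and the trail is produced whether or not the alert clears, so the audit record exists by construction instead of by discipline.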
In most organisations, none of this exists. The investigation is manual, unstructured, and undocumented to a degree that would surprise anyone who has not sat next to an analyst and watched them work.
The Gap Between Detection and Decision
Sanctions screening programs are often evaluated on the quality of their detection. How accurate is the matching? How broad is the list coverage? How low is the false positive rate?
These are important questions. They are also incomplete.
The effectiveness of a screening program is not determined by how well it detects potential matches. It is determined by what happens after detection. How alerts are investigated, how consistently decisions are made, how thoroughly those decisions are documented, and whether the entire process can withstand scrutiny from a regulator.
Most organisations have invested heavily in detection. The screening tool is configured, integrated, and monitored. The investigation process behind it is manual, inconsistent, and largely invisible until an audit surfaces the gaps. The alert is the beginning of compliance work, not the end of it. Detection starts compliance. Resolution proves it.