Sanctions screening software compares entity names against government sanctions lists to detect potential matches. When a match is found, the system generates an alert for human review.
This article explains how sanctions screening works in practice: what triggers it, how matching algorithms identify potential hits, why false positives occur at the rates they do, and where the screening tool's responsibility ends.
What Sanctions Screening Software Does
Sanctions screening software takes an entity name and supporting data (country, address, registration number) and compares it against government sanctions lists to determine whether a potential match exists.
Sanctions lists are registers of individuals, companies, and entities that businesses are legally prohibited or restricted from dealing with. The major regimes include OFAC (the US Treasury Department's Office of Foreign Assets Control), the EU Consolidated List, the UK Sanctions List, and the UN Security Council Consolidated List. These lists change frequently, sometimes daily.
If the software identifies a potential match, it generates a sanctions alert. An alert is not a confirmed hit. It is a notification that the screened entity may correspond to a list entry. The alert requires human review to determine whether the match is genuine or a false positive.
When Screening Is Triggered
Screening is not a one-time event. It occurs at multiple points across the trade lifecycle.
Onboarding: new customers, suppliers, and business partners are screened before any transaction takes place.
Transactional screening: many organisations screen at the point of transaction. When a purchase order, sales order, or shipment is created, the counterparty is screened again.
Ongoing monitoring: sanctions lists change frequently. Ongoing monitoring rescreens existing business partners whenever relevant lists are updated, catching new designations without waiting for the next transaction.
Ad hoc screening: compliance teams perform manual checks outside the automated workflow, typically through the screening tool's web interface.
A single business partner may be screened dozens of times per year. Each screen that produces a potential match generates an alert that requires review.
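The ongoing-monitoring trigger in particular can be pictured as a tiny sketch: when a list changes, every existing partner is rescreened, not just new counterparties. The matching here is a bare substring check standing in for a real engine, and all names are invented for illustration.

```python
# Toy sketch of ongoing monitoring: a list update triggers a full rescreen.
# A real engine would use fuzzy/phonetic matching, not a substring check.

def screen(partner: str, sanctions_list: list[str]) -> bool:
    """Return True if the partner name contains any list entry (toy match)."""
    return any(entry.lower() in partner.lower() for entry in sanctions_list)

partners = ["Nordic Timber Supply", "Huaxin Industries"]   # existing partners

list_v1 = ["acme munitions"]
list_v2 = ["acme munitions", "huaxin"]   # a new designation is added

alerts_v1 = [p for p in partners if screen(p, list_v1)]
alerts_v2 = [p for p in partners if screen(p, list_v2)]
print(alerts_v1)  # []
print(alerts_v2)  # ['Huaxin Industries'] -- caught by the rescreen, not by a transaction
```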
How Matching Algorithms Work
Matching is designed to be broad. Missing a true match is a compliance failure. Flagging a false positive is not.
That principle explains why screening tools behave the way they do, and why the alert volumes they produce are a feature of the system rather than a flaw.
Exact matching compares the screened name character by character against list entries. This catches identical names but misses anything even slightly different. In practice, entity data is rarely clean enough for exact matching alone to be sufficient.
Fuzzy matching calculates a similarity score between the screened name and list entries, allowing for variations in spelling, spacing, and formatting. A fuzzy match might flag "Huaxin Industries" against "Hua Xin Industries" even though the strings are not identical. The sensitivity threshold is typically configurable.
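As a rough illustration, a similarity score in the spirit of fuzzy matching can be computed with Python's standard library. Real screening engines use more sophisticated, tuned algorithms; the normalisation here is minimal and the 0.85 threshold is an assumed setting, not a recommendation.

```python
from difflib import SequenceMatcher

def fuzzy_score(screened: str, list_entry: str) -> float:
    """Similarity in [0, 1] after basic normalisation (lowercase, collapse spaces)."""
    a = " ".join(screened.lower().split())
    b = " ".join(list_entry.lower().split())
    return SequenceMatcher(None, a, b).ratio()

THRESHOLD = 0.85  # assumed sensitivity setting; real tools make this configurable

score = fuzzy_score("Huaxin Industries", "Hua Xin Industries")
print(f"{score:.2f}", score >= THRESHOLD)  # 0.97 True -- flagged despite the spacing difference
```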
Phonetic matching compares how names sound rather than how they are spelled. This matters for names that are spelled differently across documents but pronounced the same way. Algorithms such as Soundex and Metaphone are commonly used.
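Soundex itself is short enough to sketch. This is the classic American Soundex (first letter plus three digits), not any particular vendor's variant; the sample names are illustrative.

```python
def soundex(name: str) -> str:
    """Classic American Soundex: retain the first letter, encode the rest as digits."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4", **dict.fromkeys("mn", "5"), "r": "6"}
    name = "".join(c for c in name.lower() if c.isalpha())
    if not name:
        return ""
    first, digits, prev = name[0].upper(), "", codes.get(name[0], "")
    for c in name[1:]:
        code = codes.get(c, "")
        if code and code != prev:
            digits += code
        if c not in "hw":          # h and w do not separate duplicate codes
            prev = code
    return (first + digits + "000")[:4]

# Differently spelled, identically pronounced names collapse to one code:
print(soundex("Mohammed"), soundex("Mohamed"))  # M530 M530
```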
Transliteration matching is critical for entities with names in non-Latin scripts. A single Chinese, Arabic, or Cyrillic name can have dozens of plausible Latin-character spellings. Screening tools that handle transliteration natively can match across language variants. Tools that do not will either miss matches or require the compliance team to manually enter every possible spelling.
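A tool without native transliteration support effectively forces the compliance team into the variant-expansion approach sketched below. The per-token alternatives here are a hand-built illustration, not a real transliteration table; a genuine engine derives variants from the source script.

```python
from itertools import product

# Illustrative spelling alternatives per token (invented for this example).
VARIANTS = {
    "mohammed": ["mohammed", "mohamed", "muhammad"],
    "karimov": ["karimov", "karimow", "karimoff"],
}

def expand(name: str) -> set[str]:
    """Generate every combination of known spelling variants for each token."""
    tokens = [VARIANTS.get(t, [t]) for t in name.lower().split()]
    return {" ".join(combo) for combo in product(*tokens)}

spellings = expand("Mohammed Karimov")
print(len(spellings))  # 9 -- 3 x 3 combinations, each a potential match to screen
```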
Most screening tools combine these methods. The quality of the combination, and how transparently the tool explains why a match was flagged, varies significantly between vendors.
The Configuration Problem
Matching algorithms do not produce reliable results on default settings. They require configuration, and the quality of that configuration directly determines both the false positive rate and the risk of missed matches.
The primary decision is matching sensitivity. Set it too high and the system floods analysts with false positives. Set it too low and genuine matches slip through. Finding the right balance requires specialised knowledge of both the screening tool and the sanctions data. Most compliance teams do not have this expertise in house.
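The tradeoff can be made concrete with the same standard-library similarity score used above. The names and thresholds are illustrative: a plausible variant of a list entry scores just below 0.90, so a strict threshold misses it while a looser one flags it for review.

```python
from difflib import SequenceMatcher

def score(a: str, b: str) -> float:
    """Similarity in [0, 1] between two lowercased names (toy fuzzy match)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

entry = "hua xin industries"
candidate = "hua xin industrial"   # plausible variant of the same entity

s = score(candidate, entry)
print(f"{s:.2f}")                      # 0.89
print("flagged at 0.85:", s >= 0.85)   # True  -> alert raised, analyst reviews it
print("flagged at 0.90:", s >= 0.90)   # False -> a potential genuine match slips through
```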
The problem compounds during system changes. ERP migrations can cause sudden and significant increases in false positive rates. One organisation reported a 40% increase in false positive hits from the same data source after an upgrade. The matching logic behaves differently in the new environment, and diagnosing why requires specialised personnel who are not always available to the compliance team.
In some tools, the matching logic is a black box. The alert fires but there is no visibility into which algorithm triggered it, what similarity score was calculated, or why this particular combination was flagged.
Most organisations are not optimising matching. They are managing the consequences of it.
Why False Positives Happen
False positives are not an error. They are a byproduct of how screening works.
A false positive occurs when a screened entity shares enough similarity with a sanctions list entry to trigger an alert, but is not the sanctioned party. They are the single largest source of operational burden in sanctions compliance.
Common names
Many entity names are genuinely similar to sanctioned party names. "Al-Hassan Trading" could match dozens of list entries. A European distributor with "National" or "General" in its name may trigger alerts against entries from entirely different countries.
Transliteration ambiguity
A single name in Arabic, Chinese, or Cyrillic can be transliterated into Latin characters in many ways. Each variant is a potential match. The screening tool flags all plausible matches because it cannot determine which transliteration is correct. This is the primary driver of false positives in sanctions screening for organisations with significant non-Latin trade exposure.
Abbreviations and formatting
Entity names are rarely stored consistently across systems. "Zheijiang Huaxin Industries Co., Ltd." in one system may appear as "Zheijang Huaxin Ind." in another. Every variation creates a different matching profile.
Data quality
Misspelled names, incomplete addresses, missing country codes, and inconsistent formatting all increase the likelihood of false matches. Master data quality is the foundation of effective screening, and it is rarely as clean as the compliance team needs it to be.
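Part of the data-quality problem can be reduced by normalising names before matching. The sketch below shows the idea; the legal-suffix set and abbreviation table are tiny illustrative examples, not a complete reference set.

```python
import re

# Illustrative normalisation tables (far from complete in practice).
LEGAL_SUFFIXES = {"co", "ltd", "inc", "llc", "gmbh", "sa"}
ABBREVIATIONS = {"ind": "industries", "intl": "international", "mfg": "manufacturing"}

def normalise(name: str) -> str:
    """Lowercase, strip punctuation, expand abbreviations, drop legal suffixes."""
    tokens = re.sub(r"[^\w\s]", " ", name.lower()).split()
    tokens = [ABBREVIATIONS.get(t, t) for t in tokens if t not in LEGAL_SUFFIXES]
    return " ".join(tokens)

a = normalise("Zheijiang Huaxin Industries Co., Ltd.")
b = normalise("Zheijiang Huaxin Ind.")
print(a == b, a)  # True zheijiang huaxin industries -- both variants reduce to one form
```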
False positive rates vary widely. Under 5% in well-tuned, low-volume environments. Over 90% in organisations with high trade volumes and broadly configured thresholds. Even at the lower end, the absolute volume of alerts at scale ensures that false positive resolution remains a persistent operational cost.
What Screening Software Does Not Do
The tool identifies the problem. The analyst solves it. This is where most descriptions of sanctions screening stop, and where the real operational burden begins.
Every alert, whether a genuine match or a false positive, must be investigated, decided on, and documented. That process, sanctions resolution, falls entirely on human analysts.
The screening tool does not gather the external context the analyst needs. It does not check corporate registries, ownership structures, news sources, or company websites. It does not assess whether the sanctions programme that designated the list entry applies to the transaction in question. It does not produce a structured analysis explaining why the match is or is not genuine. And it does not generate the audit-ready documentation that regulators expect when they examine how a specific alert was handled.
The analyst does all of this manually. They toggle between the screening tool, web browsers, internal systems, and sometimes spreadsheets or shared drives to assemble information, form a judgment, and record the outcome. The sanctions alert investigation process turns the analyst into the integration layer between systems.
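What the analyst ultimately has to assemble can be pictured as a structured record. The fields below are an illustrative minimum for audit purposes, not a regulatory schema, and the identifier and evidence values are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlertResolution:
    """Illustrative audit record for one resolved sanctions alert."""
    alert_id: str
    screened_entity: str
    list_entry: str
    match_score: float
    decision: str                       # e.g. "false_positive" or "escalate"
    rationale: str                      # why the decision was reached
    evidence: list = field(default_factory=list)  # registries, news, websites consulted
    resolved_at: str = ""

    def __post_init__(self):
        if not self.resolved_at:        # timestamp the decision for the audit trail
            self.resolved_at = datetime.now(timezone.utc).isoformat()

record = AlertResolution(
    alert_id="A-2024-0117",             # hypothetical identifier
    screened_entity="Huaxin Industries",
    list_entry="Hua Xin Industries",
    match_score=0.97,
    decision="false_positive",
    rationale="Different registration country and ownership; no overlap with list entry.",
    evidence=["corporate registry extract", "company website"],
)
print(record.decision, len(record.evidence))
```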
For a straightforward false positive, this takes five to ten minutes. For an ambiguous match involving a sensitive jurisdiction, layered ownership, or limited public information, it can take several hours.
At scale, the investigation and documentation burden becomes the dominant cost in the screening program. The screening tool accounts for a fraction of the total time spent. The resolution process accounts for the rest.
What Screening Software Costs
The cost of screening is not the software. It is the alerts.
Screening software is typically priced per user, per entity screened, or as a platform subscription with tiered pricing based on volume. The license cost is real but manageable.
The larger number is analyst time. At a fully loaded analyst cost of €45 per hour and an average investigation time of eight minutes per alert, each alert costs €6 in analyst time alone. A company processing 3,000 alerts per month at an 80% false positive rate is spending roughly 320 hours of analyst time monthly on false positives. That is before any genuine matches are investigated.
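The arithmetic is worth laying out explicitly, since the inputs (hourly cost, minutes per alert, volume, false positive rate) are the levers a compliance team can actually measure:

```python
# Inputs as stated in the text; the formulas are just rate x time.
HOURLY_COST = 45            # euro, fully loaded analyst cost
MINUTES_PER_ALERT = 8       # average investigation time
ALERTS_PER_MONTH = 3000
FALSE_POSITIVE_RATE = 0.80

cost_per_alert = HOURLY_COST * MINUTES_PER_ALERT / 60
fp_alerts = ALERTS_PER_MONTH * FALSE_POSITIVE_RATE
fp_hours_per_month = fp_alerts * MINUTES_PER_ALERT / 60

print(f"EUR {cost_per_alert:.2f} per alert")                       # EUR 6.00
print(f"{fp_hours_per_month:.0f} analyst hours/month on false positives")  # 320
```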
Most mid-market companies employ one to four dedicated compliance analysts. In many of these organisations, screening resolution consumes a material share of total team capacity.
The software license is the smaller number. The investigation burden is the larger one. Most vendor conversations focus on the first. Most operational pain lives in the second.
What to Look for When Evaluating Screening Software
Most evaluations focus on detection quality. Few examine what happens after the alert is generated.
The familiar criteria still matter. Which sanctions lists does the tool cover? At minimum: OFAC, the EU Consolidated List, the UK Sanctions List, and the UN Security Council Consolidated List. How does the tool handle fuzzy, phonetic, and transliteration matching? Can the compliance team see why a match was triggered? How much manual tuning does the matching logic require, and what happens to the configuration during system migrations?
Integration matters. Does the tool connect to the company's ERP, trade management, or CRM systems via API or batch upload? The maintenance burden of the integration matters as much as its existence.
But the most important question is the one most often skipped, and it sits at the heart of how to evaluate sanctions screening software: what happens after the alert? Does the tool provide any investigation support, or does it stop at detection? How are decisions recorded? Can the audit trail be exported? Where does the tool's responsibility end and where does manual effort begin?
That boundary defines the true operational cost of the screening program. Understanding it before signing a contract is the difference between buying a detection tool and building a compliance process.
The Line Between Screening and Compliance
Sanctions screening software is a necessary component of any trade compliance program. It performs a critical function: identifying potential matches so that no transaction proceeds unchecked.
But screening is the starting point, not the finish line.
Most organisations have invested significantly in detection. Far fewer have invested in the resolution layer behind it. A screening tool can be well configured, broad in list coverage, and tightly integrated. The compliance process behind it can still be manual, inconsistent, and difficult to defend under audit.
Screening ensures you detect potential risk. Resolution determines whether you can defend your decisions.