Most compliance teams try to reduce false positives at the source. That instinct is understandable. It is also insufficient.

False positives in sanctions screening occur when a screened entity shares enough similarity with a sanctions list entry to trigger an alert, but is not the sanctioned party. They represent the vast majority of alerts in most organisations. Every one of them must still be investigated, decided on, and documented by a trade compliance analyst.

False positives are not a malfunction. They are the system working as intended. Sanctions screening is designed to over-match rather than under-match. Missing a genuine match is a compliance failure with legal consequences. Flagging a false positive is an operational cost. Every screening tool on the market reflects that trade-off.

Tuning the matching logic helps. Cleaning the data helps. Suppressing known false hits helps. But the structural drivers of false positives do not disappear, and the real cost is not that alerts exist. It is what happens after they are generated.

Why False Positives Cannot Be Eliminated

The drivers are structural and persistent, rooted in how sanctions screening software works by design.

Name similarity is real

Many entity names are genuinely close to sanctioned party names. Common words like "National," "General," "International," and "Trading" appear in thousands of legitimate businesses and in dozens of sanctions list entries. No amount of tuning eliminates the overlap.
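To make the overlap concrete, here is a minimal sketch using Python's difflib as a stand-in for a real matching engine. The business name and the list entry are hypothetical, but the pattern is typical: two unrelated entities that share only generic corporate words still score well above a common alert threshold.

```python
from difflib import SequenceMatcher

def name_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1] (difflib's ratio)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical legitimate business vs. hypothetical sanctions list entry.
# The overlap comes almost entirely from generic corporate vocabulary.
legit = "National Trading Company"
listed = "National Trading Corporation"

score = name_similarity(legit, listed)
print(f"{score:.2f}")  # well above a typical 0.70 alert threshold
```

No amount of tuning removes this kind of hit without also removing the tool's ability to catch the real "National Trading Corporation" when it appears.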

Transliteration creates unavoidable ambiguity

A single name in Arabic, Chinese, or Cyrillic can produce dozens of plausible Latin-character spellings. The screening tool cannot determine which is correct, so it flags all plausible variants. For organisations with significant non-Latin trade exposure, this is the single largest driver of alert volume.
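The combinatorics are easy to see with a deliberately tiny, hypothetical table of segment spellings. Real transliteration tables are far larger; the point is that variant counts multiply, and the screening tool must treat every plausible spelling as a potential match.

```python
from itertools import product

# Hypothetical spelling alternatives for the segments of one Arabic name.
segments = [
    ["mohammed", "muhammad", "mohamed", "mohamad"],  # given name
    [" al-", " el-", " al "],                        # article
    ["rashid", "rasheed", "rachid"],                 # family name
]

variants = ["".join(parts) for parts in product(*segments)]
print(len(variants))  # 4 x 3 x 3 = 36 plausible Latin spellings of one name
```

Each additional ambiguous segment multiplies, rather than adds to, the variant count, which is why non-Latin exposure dominates alert volume.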

Entity data is inconsistent

The same business partner may be recorded differently across ERP systems, subsidiaries, and transaction records. Each variation creates a different matching profile and a different set of potential alerts.

Sanctions lists keep growing

Ongoing geopolitical tensions have added thousands of new designations across US, EU, and UK lists. More list entries mean more potential matches against the same entity data. Alert volumes are rising even in organisations that have not changed their screening configuration.

These are not problems that can be configured away. They are characteristics of the environment in which screening operates.

What the Standard Approaches Can Actually Do

There are well-established methods for reducing false positive rates at the detection stage. Each one helps. None is sufficient on its own.

Threshold tuning

Every screening tool has a sensitivity setting that determines how similar a name must be to a list entry before an alert is triggered. Raising the threshold reduces alerts. It also increases the risk of missing a genuine match. The right threshold depends on the organisation's risk appetite, trade footprint, and regulatory exposure. Most compliance teams set it once during implementation and rarely revisit it. Threshold tuning is a trade-off, not a solution. It moves the false positive rate in one direction by moving the false negative risk in the other.
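The trade-off can be illustrated with a toy example (hypothetical names, difflib standing in for a real matching engine). Raising the threshold shrinks the alert queue, but every name it clears is also a name that would never be reviewed if it were the genuine match.

```python
from difflib import SequenceMatcher

def score(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Hypothetical list entry and screened names.
list_entry = "General International Trading Co"
screened = [
    "General International Trading Co",  # the genuine match
    "General International Traders",     # near miss
    "General Trading Company",           # shares common words only
    "Atlas Shipping GmbH",               # unrelated
]

# The same population produces different alert volumes at each threshold.
for threshold in (0.70, 0.85, 0.95):
    alerts = [name for name in screened if score(name, list_entry) >= threshold]
    print(f"threshold {threshold}: {len(alerts)} alert(s)")
```

The alert count falls monotonically as the threshold rises; what the printout cannot show is which of the cleared names was the one that mattered.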

Data quality improvement

Cleaner entity data produces cleaner matching results. Standardising name formats, completing missing fields, and correcting misspellings reduce the likelihood of spurious matches.

This is real work with real impact. It is also never finished. Entity data degrades continuously as new records are added, systems are migrated, and different users across different regions enter data in different formats. One organisation reported a 40% increase in false positive hits after an SAP S/4HANA upgrade, not because the matching logic changed but because the data behaved differently in the new environment.

Data quality is a maintenance problem, not a project. It reduces false positives at the margin. It does not eliminate them at scale.

Exclusion lists and whitelisting

Most screening tools allow compliance teams to suppress specific name combinations that have been previously confirmed as false positives. This is effective for recurring hits. It carries its own risk: an exclusion list that is not carefully maintained can inadvertently suppress a genuine match when circumstances change. A new designation, a change in ownership, or an updated list entry can turn a valid exclusion into a compliance gap.

Whitelisting requires ongoing governance. In some organisations, access to the exclusion list is tightly restricted to prevent operational staff from gaming auto-approval thresholds. That governance overhead is its own cost.
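One way to reduce the stale-exclusion risk is to tie each suppression to a specific version of the list entry rather than to a bare name pair, so that any change to the designation invalidates the exclusion automatically. A minimal sketch, with a hypothetical record structure:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Exclusion:
    """A suppression bound to one version of one list entry.
    The structure is illustrative, not a real tool's schema."""
    screened_name: str
    list_entry_id: str
    entry_version: str  # e.g. the entry's last-updated date or hash

def is_suppressed(name: str, entry_id: str, entry_version: str,
                  exclusions: set) -> bool:
    # If the designation has changed since the exclusion was approved,
    # the suppression no longer applies and the alert goes to review.
    return Exclusion(name, entry_id, entry_version) in exclusions

exclusions = {Exclusion("Acme National Trading GmbH", "EU-12345", "2024-01-10")}

print(is_suppressed("Acme National Trading GmbH", "EU-12345", "2024-01-10", exclusions))  # True
print(is_suppressed("Acme National Trading GmbH", "EU-12345", "2024-06-01", exclusions))  # False: entry changed
```

Version-binding converts the governance problem from "remember to re-review exclusions" into "re-review is forced whenever the list moves", which is easier to audit.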

Better matching algorithms

Better matching reduces noise. It does not remove it.

Transliteration-aware matching reduces false positives from non-Latin name variants. Entity-type matching distinguishes between individuals and companies with similar names. Multi-signal matching that considers country, address, and identifiers alongside the name produces fewer spurious alerts than name-only matching.
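A minimal sketch of multi-signal matching, with illustrative weights and hypothetical field names: two candidates with identical name scores separate cleanly once country, entity type, and identifiers are considered.

```python
def multi_signal_score(alert: dict, entry: dict) -> float:
    """Blend name similarity with corroborating signals.
    Weights are illustrative; real tools calibrate them empirically."""
    name = alert["name_score"]  # precomputed fuzzy name similarity
    country = 1.0 if alert["country"] == entry["country"] else 0.0
    etype = 1.0 if alert["entity_type"] == entry["entity_type"] else 0.0
    ident = 1.0 if alert.get("reg_no") and alert.get("reg_no") == entry.get("reg_no") else 0.0
    return 0.5 * name + 0.2 * country + 0.2 * etype + 0.1 * ident

entry = {"country": "IR", "entity_type": "company", "reg_no": "987"}
# Same high name score; only the corroborating signals differ.
strong = {"name_score": 0.9, "country": "IR", "entity_type": "company", "reg_no": "987"}
weak = {"name_score": 0.9, "country": "DE", "entity_type": "individual", "reg_no": "111"}

print(round(multi_signal_score(strong, entry), 2))  # high: likely true match
print(round(multi_signal_score(weak, entry), 2))    # low: name-only overlap
```

Name-only matching would score both candidates identically; the extra signals are what push the weak candidate below an alerting threshold.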

Upgrading matching logic is a valid strategy, particularly when moving from an older tool to a more modern one. But even the best algorithm, applied to the same messy entity data against the same growing sanctions lists, will still produce false positives. The floor is not zero. It is not close to zero.

The Limits of Matching Logic Reduction

Every method described above operates at the detection stage. They all aim to prevent false positive alerts from being generated in the first place. This is worth pursuing. But it has a ceiling, and most organisations are already close to their practical minimum.

The structural drivers (name similarity, transliteration, list growth) ensure a persistent baseline of alerts that no amount of tuning will eliminate. For organisations processing thousands of entities per month, that baseline translates to hundreds of alerts requiring manual review. The question is not whether you can eliminate false positives. You cannot. The question is what happens to each one once it is generated.

Where the Real Cost Sits

The cost of a false positive is not the alert itself. It is the investigation.

A compliance analyst reviewing a false positive follows a consistent workflow. They evaluate the match. They gather external context, typically by searching corporate registries, news sources, and company websites across multiple systems. They check designation details to determine whether the sanctions programme even applies. They make a decision. They document the reasoning and the evidence. The analyst becomes the integration layer between systems that do not talk to each other.

For a straightforward false positive, this takes five to ten minutes. For an ambiguous match, twenty minutes to an hour. At a fully loaded analyst cost of €45 per hour and an average of eight minutes per alert, each false positive costs €6 in analyst time alone. A company processing 3,000 alerts per month at an 80% false positive rate (2,400 false positives) is spending over €14,000 per month in analyst time on alerts that are not genuine matches.

Reducing the false positive rate from 80% to 60% through tuning and data quality saves real money. But 60% of 3,000 is still 1,800 false positives per month, each requiring the same manual investigation workflow. The rate improved. The burden remained. Matching logic improvements reduce the number of alerts. They do not reduce the cost per alert. That cost is determined entirely by the resolution process.
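The arithmetic from the two paragraphs above, worked through with the figures given in the text:

```python
# Figures from the text: €45/hour fully loaded analyst cost, eight
# minutes per alert on average, 3,000 alerts per month.
HOURLY_COST_EUR = 45.0
MINUTES_PER_ALERT = 8
ALERTS_PER_MONTH = 3000

cost_per_alert = HOURLY_COST_EUR / 60 * MINUTES_PER_ALERT  # = 6.0 EUR

for fp_rate in (0.80, 0.60):
    false_positives = round(ALERTS_PER_MONTH * fp_rate)
    monthly_cost = false_positives * cost_per_alert
    print(f"FP rate {fp_rate:.0%}: {false_positives} false positives, "
          f"EUR {monthly_cost:,.0f} per month")
```

Tuning the rate from 80% down to 60% moves the monthly figure from €14,400 to €10,800; the cost per alert, the other factor in the product, is untouched by anything done at the detection stage.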

The Resolution Lever

The highest-leverage opportunity in false positive management is not generating fewer alerts. It is reducing what each alert costs to resolve. In most organisations, none of the infrastructure for this exists. The investigation workflow is entirely manual, entirely unstructured, and different every time depending on who handles it and when.

Changing that means addressing the investigation itself.

Context assembly

If the external information the analyst needs (corporate registry data, ownership structures, news, web presence) were gathered and presented alongside the alert automatically, the manual research step disappears. That step is where the majority of investigation time is spent.
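A sketch of what context assembly might look like, with stub fetchers standing in for real registry, news, and web integrations (all names here are hypothetical). The design point is that a failed source degrades the context rather than blocking the alert.

```python
from dataclasses import dataclass, field

@dataclass
class AlertContext:
    """External context assembled before the analyst opens the alert."""
    registry: dict = field(default_factory=dict)
    news: list = field(default_factory=list)
    web: list = field(default_factory=list)
    gaps: list = field(default_factory=list)  # sources that failed

def assemble_context(entity_name: str, fetchers: dict) -> AlertContext:
    ctx = AlertContext()
    for source, fetch in fetchers.items():
        try:
            setattr(ctx, source, fetch(entity_name))
        except Exception:
            ctx.gaps.append(source)  # a missing source is flagged, not fatal
    return ctx

# Stub fetchers; real ones would call registry, news, and web APIs.
fetchers = {
    "registry": lambda name: {"name": name, "status": "active"},
    "news": lambda name: [],
    "web": lambda name: [f"https://example.com/?q={name}"],
}
ctx = assemble_context("Acme Trading GmbH", fetchers)
print(ctx.registry["status"])  # active
```

The analyst opens one record instead of five browser tabs; the gaps list makes explicit what could not be gathered.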

Structured analysis

If the system evaluates the match against multiple signals (name similarity, geography, entity type, identifiers) and presents a structured assessment, the analyst reviews a completed analysis rather than building one from scratch.

Consistent decision logic

If the same inputs produce the same recommended output regardless of which analyst handles the case, consistency improves and the time spent on judgment calls decreases. The analyst confirms a recommendation rather than reaching an independent conclusion every time.

Automated clearance for obvious non-matches

If a false positive can be identified with high confidence (wrong entity type, conflicting identifiers, geographic impossibility) and cleared automatically with a documented reasoning trail, it never reaches the analyst at all.
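A sketch of rule-based clearance under the three conditions named above. The rules, field names, and the two-signal bar are illustrative; a real clearance policy would be set and governed by compliance. The returned reasons list is the documented trail an auditor can replay.

```python
def auto_clear(alert: dict, entry: dict):
    """Return (cleared, reasons) for high-confidence non-matches only."""
    reasons = []
    if alert["entity_type"] != entry["entity_type"]:
        reasons.append(f"entity type mismatch ({alert['entity_type']} vs {entry['entity_type']})")
    if alert.get("reg_no") and entry.get("reg_no") and alert["reg_no"] != entry["reg_no"]:
        reasons.append("conflicting registration identifiers")
    if entry.get("countries") and alert["country"] not in entry["countries"]:
        reasons.append("no geographic link to the designation")
    # Clear only when at least two independent signals disagree;
    # anything less goes to an analyst.
    return len(reasons) >= 2, reasons

alert = {"entity_type": "company", "reg_no": "DE-555", "country": "DE"}
entry = {"entity_type": "individual", "reg_no": "IR-001", "countries": ["IR", "SY"]}

cleared, trail = auto_clear(alert, entry)
print(cleared)     # True: three independent non-match signals
print(len(trail))  # 3 documented reasons
```

Because the decision and its evidence are produced together, the clearance is auditable without reconstruction.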

These are not matching logic improvements. They are resolution improvements. They operate downstream of the alert, reducing the time, cost, and inconsistency of the investigation process rather than trying to prevent the alert from being generated.

The distinction matters. Fine-tuning matching logic has a ceiling. Resolution-stage reduction has a multiplier. Every second removed from the investigation workflow is multiplied across every alert, every month, every analyst.

A Realistic Approach to False Positive Management

A complete false positive strategy works both sides of the problem.

At the detection stage: tune matching thresholds to the organisation's risk profile. Invest in data quality as an ongoing discipline. Maintain exclusion lists with proper governance. Evaluate matching algorithms when selecting or upgrading screening tools.

At the resolution stage: invest in sanctions resolution to reduce the investigation burden per alert. Automate context gathering. Structure the analysis so that the analyst reviews rather than builds. Enable automatic clearance for high-confidence false positives. Capture every decision in a format that does not require reconstruction when an auditor asks for it.

Most compliance programs have invested heavily in the first category. Few have invested at all in the second.

Fine-tuning matching logic is necessary. Resolution improvements are decisive. A company that reduces its false positive rate by 20% and reduces its cost per alert by 60% has done far more for its compliance program than a company that achieves a marginally better matching score but still investigates every alert by hand.

Where This Leaves You

False positives in sanctions screening are not going away. The structural drivers ensure a persistent baseline that no amount of tuning will eliminate. The standard reduction methods at the screening stage are necessary. They are not sufficient.

The cost of false positives is not measured in alert volume. It is measured in analyst hours, decision inconsistency, and audit exposure. Reducing that cost requires looking past the screening tool and into the resolution process behind it. You do not control your costs at detection. You control them at resolution.