A screening tool does not pass an audit. A compliance process does.
Most companies prepare for audits by proving their screening works. That is not what auditors are testing. The screening tool is the minimum expectation. What they examine is everything that happens after the alert is generated: who investigated it, how, and whether the evidence trail survives scrutiny.
What Auditors Are Actually Evaluating
They test whether your decisions can be defended. That test breaks down into four areas.
Coverage: Is every relevant counterparty being screened? Across all systems, all subsidiaries, all transaction types? Are there gaps where entities enter the business without being checked?
Decision quality: When an alert was generated, was the investigation thorough? Did the analyst have access to enough information to make a sound judgment? Was the decision reasonable given the available evidence?
Consistency: Would a different analyst, reviewing the same alert with the same information, have reached the same conclusion? Are the standards applied uniformly across regions, teams, and time periods?
Documentation: Is the reasoning behind each decision recorded? Can the compliance team produce the evidence trail for a specific alert, for a specific counterparty, on a specific date, without manual reconstruction?
The first question is about the screening tool. The other three are about sanctions resolution.
The First Question: Which Lists and How Often
Auditors will ask which sanctions lists the organisation screens against. At minimum: OFAC, EU Consolidated, UK Consolidated, and UN Security Council. They will ask how frequently screening occurs, whether it covers transactions as well as onboarding, and whether ongoing monitoring is in place when lists are updated. They will ask about the data provider and update frequency.
Most organisations answer these questions comfortably. Modern screening tools handle this automatically. How sanctions screening software works is well-understood, and list coverage is a solved problem for anyone using a mainstream vendor.
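As an illustration only (the field names and cadences below are assumptions, not any vendor's schema), the answers to these questions often reduce to a small piece of configuration:

```typescript
// Illustrative screening configuration: which lists, how often they are
// refreshed, and whether counterparties are re-screened when a list
// changes. Field names are hypothetical, not any vendor's schema.
interface ScreeningConfig {
  lists: { source: string; refresh: "hourly" | "daily" }[];
  screenAtOnboarding: boolean;
  screenPerTransaction: boolean;
  rescreenOnListUpdate: boolean; // ongoing monitoring against list changes
}

const config: ScreeningConfig = {
  lists: [
    { source: "OFAC SDN", refresh: "hourly" },
    { source: "EU Consolidated", refresh: "daily" },
    { source: "UK Consolidated", refresh: "daily" },
    { source: "UN Security Council", refresh: "daily" },
  ],
  screenAtOnboarding: true,
  screenPerTransaction: true,
  rescreenOnListUpdate: true,
};
```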
The Real Examination: What Happened After the Alert
The substantive part of a sanctions audit focuses on alert handling. This is where programs succeed or fail.
Can you show the investigation for a specific alert?
An auditor will select individual alerts and ask to see the full case file. What information did the analyst have? What sources were consulted? What was the reasoning? What was the decision?
In organisations with strong resolution infrastructure, this is a retrieval exercise. The case file exists, it is structured, and it can be produced on demand. In most organisations, it is a reconstruction exercise. The analyst's decision may be recorded in the screening tool. The supporting evidence may be on a shared drive. The reasoning may be in a free-text field, an email, or nowhere at all. Assembling the complete picture for a single alert can take hours.
One compliance professional described the current state directly: "When an audit occurs, auditors must check multiple systems to reconstruct the case."
Auditors know this. When they encounter fragmented documentation, they do not assume the decision was wrong. They note that the decision cannot be verified. If it cannot be verified, it does not exist from an audit perspective.
Can you demonstrate consistency?
Auditors do not look at individual alerts in isolation. They look for patterns.
If the same entity was screened multiple times, were the outcomes consistent? If different analysts reviewed similar alerts, did they apply the same standards? If the organisation operates across regions, are the documentation practices comparable? Inconsistency tells the auditor that outcomes depend on the person, not the process. That is not necessarily a violation, but it is a finding. Findings accumulate.
The consistency question is particularly sharp for decentralised organisations where screening is handled by regional staff, procurement teams, or other non-specialists. If the head of trade compliance in Munich and a regional manager in Shanghai are both making sanctions decisions, the auditor will want to see that both are operating to the same standard. In most cases, they are not.
Can you prove that nothing was missed?
The hardest question in an audit is not how you handled alerts but whether you handled all of them. Showing that investigated alerts were resolved correctly is not enough. The auditor also wants confidence that every relevant entity was screened in the first place. Are there transactions that bypassed screening? Systems where screening is not enforced? Onboarding paths that do not trigger a check?
In organisations with fragmented ERP landscapes, multiple subsidiaries, or manual screening triggers, the answer is often uncertain. Screening may be comprehensive in the primary system and absent in secondary ones. The compliance team may believe coverage is complete. The audit may reveal otherwise.
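One way to make the coverage answer demonstrable rather than uncertain is periodic reconciliation: export counterparty identifiers from every system of record and diff them against the screening log. A minimal sketch, with hypothetical identifiers:

```typescript
// Coverage reconciliation sketch: entities present in the ERP but absent
// from the screening log are coverage gaps. Identifiers are hypothetical.
function findUnscreened(
  erpCounterparties: string[],
  screenedIds: string[],
): string[] {
  const screened = new Set(screenedIds);
  return erpCounterparties.filter((id) => !screened.has(id));
}

// Example: two ERP entities were never screened.
const gaps = findUnscreened(
  ["CP-001", "CP-002", "CP-003", "CP-004"],
  ["CP-001", "CP-003"],
);
console.log(gaps); // ["CP-002", "CP-004"]
```

Run against every subsidiary and secondary system, not just the primary one, this turns "we believe coverage is complete" into evidence.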
Where Most Programs Fall Short
The pattern across audits is consistent. Detection is usually adequate. The gaps are in what follows.
Documentation
Case notes are thin, inconsistent, or missing. Free-text fields contain brief conclusions without supporting reasoning. Sources are not cited. The decision rationale for a cleared false positive reads "no match" with nothing explaining how that conclusion was reached.
Auditors do not accept screenshots as sufficient evidence. They expect system-generated records: what was screened, what was found, what was investigated, what was decided, and why.
Consistency
Different analysts document differently. Different regions apply different standards. The same alert type produces different outcomes depending on who handles it and when. Without structured investigation workflows, consistency depends entirely on individual discipline. At volume, that is not a reliable control.
Fragmentation
The screening tool holds the alert. The shared drive holds the PDF. The email thread holds the discussion about the edge case. The ERP holds the transaction data. No single system captures the complete decision record.
When a regulator asks to see the file for a specific counterparty, the compliance team locates and assembles fragments from different systems, often without a clear index of where each piece lives. This signals to the auditor that the program was not designed with retrievability in mind.
What an Audit-Ready Program Looks Like
The difference between a program that survives an audit and one that does not is rarely the screening tool. It is what sits behind it.
Structured documentation by default
Every alert produces a case record that captures what was screened, what was found, what sources were consulted, what the decision was, and why. This record is generated as part of the investigation workflow, not written after the fact under time pressure.
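A minimal sketch of what such a record might contain, assuming nothing about any particular case-management system; the field names are illustrative, not a standard:

```typescript
// Illustrative case record: one structured object per alert, written as
// the investigation happens. Field names are assumptions, not a standard.
interface CaseRecord {
  alertId: string;
  counterpartyId: string;
  screenedAt: string;          // ISO timestamp of the screening event
  listsMatched: string[];      // what was found, e.g. ["OFAC SDN"]
  sourcesConsulted: string[];  // registries, adverse media, internal data
  decision: "cleared" | "escalated" | "blocked";
  rationale: string;           // the reasoning, not just the conclusion
  analyst: string;
  decidedAt: string;
}
```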
Consistent investigation standards
The same alert type, reviewed by any analyst in any region, produces the same depth of investigation and the same documentation quality. This requires structured workflows, not training alone.
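Enforcement can live in the workflow itself. As a sketch, reusing the hypothetical CaseRecord above, a case simply cannot be closed until the fields an auditor will ask for are filled in:

```typescript
// Enforce documentation standards in the workflow, not in training:
// refuse to close a case whose record would not survive an audit.
// The thresholds here are illustrative.
function canClose(record: CaseRecord): string[] {
  const problems: string[] = [];
  if (record.sourcesConsulted.length === 0) {
    problems.push("no sources cited");
  }
  if (record.rationale.trim().length < 50) {
    problems.push("rationale too thin"); // a bare "no match" fails here
  }
  return problems; // an empty array means the record meets the standard
}
```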
Single retrievable record
The complete decision trail for any counterparty, any alert, any date is accessible from one place. No reconstruction. No assembly across shared drives and email threads. When the auditor asks, the answer is immediate.
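Concretely, that means case records live in one indexed store and are filtered, not assembled. A sketch, again using the hypothetical CaseRecord:

```typescript
// Retrieval, not reconstruction: the full decision trail for a
// counterparty comes from a single store, optionally filtered by date.
function decisionTrail(
  records: CaseRecord[],
  counterpartyId: string,
  onDate?: string, // "YYYY-MM-DD"
): CaseRecord[] {
  return records.filter(
    (r) =>
      r.counterpartyId === counterpartyId &&
      (!onDate || r.decidedAt.startsWith(onDate)),
  );
}
```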
Complete coverage evidence
The organisation can demonstrate that every relevant entity was screened, across all systems and subsidiaries, with no gaps. Screening is triggered by system logic, not human memory.
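"Triggered by system logic" means the check sits in the code path, so an unscreened entity cannot proceed. A sketch of the idea; every name below is a hypothetical stub, not a real API:

```typescript
// Screening enforced by system logic: the onboarding path cannot
// complete without a screening result. All names here are stubs.
interface ScreenResult { matched: boolean; lists: string[] }

async function screen(name: string): Promise<ScreenResult> {
  return { matched: false, lists: [] }; // stand-in for the real call
}

async function onboardCounterparty(name: string): Promise<void> {
  const result = await screen(name);
  if (result.matched) {
    // Route to resolution and block onboarding until the alert is cleared.
    throw new Error(`Screening hit for ${name}: onboarding blocked pending review`);
  }
  // Only reachable with a clear result; no one has to remember the check.
  console.log(`Counterparty ${name} created with clear screening result`);
}
```

Most compliance programs have the screening tool in place. Few have the resolution infrastructure to make it audit-proof.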
The Audit Is Not the Risk
The real risk is not the audit itself. It is operating for years with a resolution process that cannot withstand one.
Every alert investigated inconsistently, every case note left thin, every decision documented in a format that cannot be retrieved. These are not problems that appear on the day of the audit. They accumulate quietly, across thousands of alerts, across months and years. The audit simply makes them visible.
Most compliance programs are built around detection. The screening tool is configured, the lists are current, the matching is active. That foundation is necessary. It is not what gets tested.
An auditor does not evaluate your screening tool. They evaluate your decisions. Detection shows intent. Resolution proves control.