The Financial Action Task Force (FATF) has flagged artificial intelligence, especially deepfakes, as a fast-growing risk factor for money laundering, terrorist financing, and proliferation financing. The central concern is that synthetic video, audio, and images can convincingly impersonate real people and fabricate supporting evidence, allowing criminals to defeat controls that were designed for traditional fraud patterns and document-based deception.
Deepfakes matter for anti-money laundering because they target the “front door” of compliance: customer onboarding, identity verification, and ongoing account authentication. As more institutions rely on remote onboarding and biometric checks, the incentive to bypass those checks increases. The risk is not theoretical; AI tools are becoming cheaper and easier to use, enabling sophisticated impersonation at scale and reducing the time and cost needed to operationalise fraud schemes.
A key pressure point is the growing dependence on digital ID and biometric verification in KYC workflows. Video-based onboarding, facial recognition, and automated document checks can improve efficiency, but they can also be manipulated with synthetic media if the institution’s assurance model is not robust. When criminals can present a realistic face, voice, or “live” interaction that is algorithmically generated, basic verification steps may no longer provide the intended level of confidence.
FATF’s analysis also highlights that deepfake risk is amplified by uneven detection capability. Many compliance programs and vendor tools were built to validate documents and screen names, not to detect synthetic content in real time. Where detection lags behind creation, criminals can exploit the gap by opening accounts, passing “liveness” checks, and moving funds before red flags are triggered. This creates a short, high-impact window where large losses and rapid laundering can occur.
The cross-border nature of modern finance increases the challenge. Remote onboarding often involves customers, devices, or transaction pathways spanning multiple jurisdictions, each with different assurance expectations and investigative capacity. Criminals can select the weakest link, whether that is a less mature digital ID ecosystem, inconsistent onboarding standards, or limited monitoring sophistication, and then move funds across networks that complicate tracing and recovery.
Several recent typologies illustrate how deepfakes can translate into real financial losses and laundering flows. One widely cited pattern is executive impersonation during video calls, where criminals simulate senior leadership and pressure staff to execute urgent payments. In one widely reported case, a finance employee was induced to transfer around USD 25 million after a video meeting in which the company’s CFO and other colleagues were convincingly deepfaked, demonstrating how synthetic media can defeat standard “call-back” logic when organisations treat video as inherently trustworthy.
Deepfakes also strengthen social engineering at scale, particularly in romance and investment scams. Criminal networks can use synthetic identities, supported by AI-generated photos, voice notes, and live video, to build credibility quickly, convince victims to send funds, and then route proceeds through accounts and channels that obscure origin. Where the scam involves crypto, proceeds may move into virtual assets and then into wallets or services that make recovery and attribution more difficult.
Another emerging scenario involves fabricated “trusted media” and reputational impersonation. Deepfake videos that mimic reputable presenters or news formats can be used to promote fraudulent investment opportunities, including fake IPO promotions or “exclusive” token sales. These campaigns can generate high volumes of victims in short periods, with funds rapidly shifted across accounts, converted into virtual assets, and dispersed to reduce traceability.
Across these examples, the AML theme is consistent: deepfakes do not merely create new fraud narratives; they can undermine verification itself. If synthetic content can pass onboarding checks and ongoing authentication, then criminals can create or take over accounts more easily, recruit mules more efficiently, and accelerate the placement and layering of illicit proceeds. This increases pressure on both preventive controls (CDD/KYC) and detective controls (transaction monitoring, typology detection, and investigations).
FATF connects these threats to core expectations around customer identification and verification, including the need for reliable, independent sources and a risk-based approach to onboarding. The implication is that digital ID and biometric tools must be evaluated not only for convenience and conversion rates, but also for resilience against synthetic media. Organisations should treat deepfake capability as a material factor in their customer risk assessment, product/channel risk assessment, and control design.
On the response side, FATF points to a combination of technical and organisational measures rather than a single “silver bullet.” Stronger identity assurance is one pillar, particularly more robust liveness checks and layered verification approaches that reduce reliance on any single signal. Multi-factor authentication, device and behavioural signals, and step-up verification for higher-risk events can help mitigate impersonation risk, especially where initial onboarding is fully remote.
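To make the layered approach concrete, the sketch below shows one way a step-up decision might combine independent signals so that no single check (including liveness) is decisive. It is a minimal illustration, not a FATF prescription: the signal names, weights, and thresholds are assumptions chosen for readability, and real deployments would calibrate them to their own risk appetite and data.

```python
from dataclasses import dataclass

@dataclass
class OnboardingSignals:
    """Illustrative verification signals; names and 0-1 scales are assumptions."""
    liveness_score: float    # output of a liveness / presentation-attack check
    document_score: float    # output of automated document verification
    device_trust: float      # device and network reputation signal
    behaviour_score: float   # behavioural consistency signal (typing, navigation)
    high_risk_event: bool    # e.g. large first payment or beneficiary change

def verification_decision(s: OnboardingSignals) -> str:
    """Combine layered signals so no single check carries the whole decision."""
    # Hard fail: clear evidence of a presentation attack or forged document.
    if s.liveness_score < 0.3 or s.document_score < 0.3:
        return "decline"

    # Composite confidence across signals (weights are illustrative only).
    confidence = (0.35 * s.liveness_score + 0.30 * s.document_score
                  + 0.20 * s.device_trust + 0.15 * s.behaviour_score)

    # Step up (extra factor or manual review) for higher-risk events
    # or when the combined confidence is only moderate.
    if s.high_risk_event or confidence < 0.75:
        return "step_up"
    return "approve"

if __name__ == "__main__":
    print(verification_decision(OnboardingSignals(0.9, 0.85, 0.6, 0.7, True)))    # step_up
    print(verification_decision(OnboardingSignals(0.95, 0.9, 0.8, 0.85, False)))  # approve
```

The design point is the same one FATF makes: a convincing deepfake may defeat the liveness signal on its own, but it is harder to simultaneously defeat device, behavioural, and event-risk signals, and higher-risk events always trigger an additional factor.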
A second pillar is detection capability that is designed for AI-enabled deception. This includes tooling that can identify inconsistencies in video and audio, detect synthetic artifacts, and flag suspicious onboarding sessions for manual review. However, technology alone is insufficient if staff are not trained to recognise deepfake-enabled social engineering. Training should cover the operational reality that “seeing is no longer believing” and should be integrated into fraud prevention, onboarding operations, and AML escalation processes.
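The routing logic implied here can be sketched as a simple triage step that sits between synthetic-media detectors and human reviewers. The detector names and thresholds below are hypothetical placeholders, not a specific vendor API; the point is only that ambiguous sessions go to trained analysts rather than being silently approved.

```python
from typing import Dict

def triage_onboarding_session(detector_scores: Dict[str, float],
                              review_threshold: float = 0.5,
                              block_threshold: float = 0.9) -> str:
    """Route a remote onboarding session based on synthetic-media indicators.

    detector_scores maps illustrative detector names (e.g. 'face_swap',
    'voice_clone', 'replay_artifact') to probabilities in [0, 1]. Both the
    detectors and the thresholds are assumptions for this sketch.
    """
    if not detector_scores:
        return "manual_review"        # absence of signal is itself worth a human look
    top_signal = max(detector_scores.values())
    if top_signal >= block_threshold:
        return "block_and_escalate"   # strong synthetic-media indication: escalate to fraud/AML
    if top_signal >= review_threshold:
        return "manual_review"        # ambiguous: a trained analyst reviews the session
    return "proceed"

# Example: a borderline face-swap score is queued for human review.
print(triage_onboarding_session({"face_swap": 0.62, "voice_clone": 0.10}))  # manual_review
```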
A third pillar is adaptive monitoring and investigations. As criminals use AI to mimic “normal” activity and blend into legitimate patterns, transaction monitoring may need to rely more on behavioural anomalies, network relationships, and cross-channel indicators, rather than static rules alone. Investigations may increasingly require collaboration with cybercrime specialists, digital forensics capacity, and, where virtual assets are involved, effective blockchain tracing and wallet risk analytics.
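As a rough illustration of what “behavioural anomalies rather than static rules alone” can mean in practice, the sketch below compares a transaction against the customer’s own history and only alerts when rule hits and behavioural deviation reinforce each other, or when the deviation alone is extreme. Field names and thresholds are illustrative assumptions, not a monitoring standard.

```python
import statistics

def behavioural_anomaly_score(history: list[float], amount: float) -> float:
    """Z-score of a new transaction amount against the customer's own history."""
    if len(history) < 2:
        return 0.0                    # not enough history to judge deviation
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return 0.0 if amount == mean else float("inf")
    return abs(amount - mean) / stdev

def should_alert(rule_hits: int, history: list[float], amount: float,
                 anomaly_threshold: float = 3.0) -> bool:
    """Alert when static rules and behavioural deviation reinforce each other,
    or when the deviation alone is extreme (thresholds are illustrative)."""
    score = behavioural_anomaly_score(history, amount)
    return (rule_hits > 0 and score >= anomaly_threshold) or score >= 2 * anomaly_threshold

# Example: a payment far outside the customer's normal pattern plus one rule hit.
print(should_alert(rule_hits=1, history=[120, 80, 150, 95, 110], amount=5_000))  # True
```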
FATF also highlights the importance of public–private cooperation. Deepfake typologies evolve quickly, and individual institutions may only see fragments of a broader campaign. Structured information sharing, through financial intelligence units, industry forums, and cross-sector partnerships, can help accelerate detection and shorten the time between new criminal methods emerging and controls being updated. This is particularly relevant for coordinated scams that target multiple institutions or exploit the same onboarding vulnerabilities across providers.
Beyond deepfakes, FATF flags wider AI-driven risks that can support laundering and evasion. Generative AI can produce convincing fake documentation and narratives that strengthen fraudulent onboarding and fabricated source-of-funds explanations. Predictive and discriminative models can be misused to design transaction behaviours that imitate legitimate activity, potentially reducing the likelihood of alerts. AI agents, in the forward-looking assessment, could automate elements of criminal operations, such as mule recruitment, transaction routing, and iterative testing of controls, making laundering workflows more scalable and adaptive.
A practical implication for compliance leaders is that AI risk should be treated as a strategic capability gap, not merely a tactical fraud issue. Governance should clarify accountability for digital ID assurance, vendor oversight, and model risk management for any AI tools used in compliance. Control testing should include scenario-based exercises that reflect synthetic identity and impersonation attempts, and incident response plans should anticipate high-velocity campaigns that require rapid containment and coordinated reporting.
For VASPs and crypto-facing financial institutions, the message is particularly direct: scams and impersonation often serve as the predicate for laundering flows that enter virtual assets, where rapid conversion and cross-border movement can occur. Strengthening onboarding and monitoring at the fiat–crypto interface, improving customer communication to reduce scam success, and enhancing investigative readiness are increasingly central to managing financial crime exposure.
The overarching conclusion is that AI is changing the economics of deception. Deepfakes and generative tools can scale impersonation, shorten criminal learning cycles, and erode trust in digital interactions. Effective AML resilience will depend on layered identity assurance, modernised monitoring approaches, trained teams, and stronger cooperation across the public and private sectors, because the threat is evolving faster than traditional control cycles were designed to handle.
Source: Horizon Scan AI and Deepfakes