In finance, trust is everything, but what happens when even our eyes and ears fail us? Recent cases show this is no longer a theoretical concern. Deepfake technology, powered by neural networks trained on massive datasets, can analyze audio, video, and images to produce highly realistic impersonations and edits, creating increasingly convincing fake media that fraudsters can use to commit crimes. In one incident, for example, a multinational corporation lost 25 million USD after fraudsters used deepfake video technology to impersonate its CFO during a live video call. In another, a British energy executive wired funds after “speaking” with someone he thought was his CEO, only to discover later that the voice was synthetic. Scenarios like these are now increasingly common. In a recent article I co-authored with Dr. Michal Lavi, “Seeing is Believing? Deepfakes in Financial Markets,” we examine this emerging phenomenon and suggest that a more preventive enforcement strategy is needed to address it.
Our article argues that while much attention has been paid to deepfakes on social media, where they threaten elections, reputations, and democracy, an equally serious danger lies in narrowly targeted deepfake scams in the financial sector. These scams manipulate digital identity in real time, distorting not the past but the present, and their cumulative impact may threaten systemic trust in financial markets themselves.
Financial markets depend on the integrity of information. Traders, institutions, and regulators operate under the assumption that market signals and verified communications are authentic. Deepfakes undermine that very premise. The implications are profound: a fake “executive” can order fund transfers, a forged “CEO announcement” can move stock prices, and a falsified “court video” can influence legal outcomes. In each case, the technology attacks trust, the foundation of the financial system. This poses a significant threat to the stability and resilience of our markets.
Current Regulation Should, Among Other Things, Prohibit Remote-Only KYC For High-Risk Activities
Most legal responses to deepfakes focus on broadcast deepfakes: widely shared fake videos on social media. Laws like the “NO FAKES Act,” the “Take It Down Act,” and the EU’s Digital Services Act (DSA) emphasize labeling, disclosure, and removal of manipulated content. These measures address public misinformation, but they fail to tackle targeted deepfake scams that impersonate real individuals for financial gain. Enforcement aimed at platforms or creators therefore misses the mark: perpetrators are often anonymous, operate across borders, and are intentionally non-compliant, placing them beyond the reach of labeling rules or takedown procedures.
This asymmetry reveals a deeper weakness in the EU’s enforcement model. While the DSA regulates platforms and the forthcoming AI Act regulates developers, the financial sector’s exposure stems from the use of deepfakes within regulated organizations themselves. Enforcement, therefore, must move inward: from policing speech to reinforcing verification within firms. In our paper, we highlight that for broadcast deepfakes, platforms such as social networks act as the natural regulators. But in financial markets, banks and payment providers become the gatekeepers of identity. This shifts the focus from ex post enforcement to ex ante safeguards, requiring technological, procedural, and cultural protections to be built directly into financial institutions.
We propose that financial institutions adopt multi-factor, real-time verification that blends biometric, behavioral, and device-based checks (a minimal sketch of this layered approach follows below). The law should also prohibit remote-only KYC for high-risk activities, such as opening accounts or issuing credit cards, and instead require a return to in-person, face-to-face verification procedures. Regulators such as the ECB and ESMA should set minimum standards for deepfake-resistant authentication, while national supervisors coordinate through a “Task Force on AI in Financial Services” to harmonize practices. These steps may seem strict, but in an age of AI-driven impersonation, face-to-face verification has become a necessary safeguard.
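To make the layered-check idea concrete, the following is a minimal, purely illustrative sketch in Python of how a verification gate might combine biometric, behavioral, and device signals and refuse to approve high-risk actions on remote signals alone. The signal names, thresholds, and decision policy are assumptions introduced for illustration, not a prescribed standard or an existing system.

```python
# Illustrative sketch only: a hypothetical multi-factor verification gate.
# All names, thresholds, and the risk policy below are assumptions for
# illustration, not a regulatory requirement or a vendor implementation.

from dataclasses import dataclass


@dataclass
class VerificationSignals:
    biometric_match: float   # e.g. liveness-checked face/voice match score, 0.0-1.0
    behavioral_score: float  # e.g. consistency with the customer's usual behavior, 0.0-1.0
    device_trusted: bool     # e.g. request comes from a known device bound to the customer


def verify_request(signals: VerificationSignals, high_risk: bool) -> str:
    """Decide how to handle a requested action given independent checks."""
    if high_risk:
        # High-risk activities (account opening, card issuance, large transfers)
        # are never approved on remote signals alone: escalate to in-person checks.
        return "escalate_to_in_person_verification"

    # Routine actions require all three factor families to agree.
    if (signals.biometric_match >= 0.9
            and signals.behavioral_score >= 0.8
            and signals.device_trusted):
        return "approve"

    # Any weak or inconsistent factor triggers human review rather than silent approval.
    return "manual_review"


if __name__ == "__main__":
    routine = VerificationSignals(biometric_match=0.95, behavioral_score=0.85, device_trusted=True)
    print(verify_request(routine, high_risk=False))  # approve
    print(verify_request(routine, high_risk=True))   # escalate_to_in_person_verification
```

The point of the sketch is the design choice, not the thresholds: no single signal, however convincing it looks or sounds, is sufficient on its own, and the riskiest actions are routed out of the remote channel entirely.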
The EU Framework Currently Does Not Sufficiently Address The Issue
The EU already has strong data and AI governance frameworks, such as the GDPR, the AI Act, and the Digital Services Act, but none adequately address deepfake scams targeting financial institutions.
Under the GDPR, such impersonations clearly violate data-protection principles, yet enforcement is reactive and hinges on identifying offenders, an almost impossible task when scammers act anonymously across borders. The AI Act takes a risk-based approach but focuses on providers, not users who weaponize legitimate tools for fraud.
We propose that the EU classify deepfake-enabled identity scams as “high-risk” uses under the AI Act, triggering mandatory oversight for institutions using AI-driven identity verification. The European Central Bank, through its Single Supervisory Mechanism, could further impose verification standards on major institutions, much as the DSA does for “Very Large Online Platforms.” This would align deepfake governance with the EU’s broader enforcement logic: addressing systemic vulnerabilities, not just isolated offenses.
However, technology alone cannot fix a problem that is also human. Financial institutions are built on trust: between clients and advisors, and between employees and managers. Deepfake scams exploit precisely these social relationships. Organizational culture and training are therefore as critical as any AI tool. We propose that banks and corporations adopt internal norms of “digital skepticism.” Employees must be trained not to trust appearances blindly and to verify unusual requests, even when they appear to come from familiar faces or voices. Regular simulations, awareness campaigns, and reporting protocols should become part of compliance culture, much like anti–money laundering (AML) procedures today.
A Call for a New Enforcement Approach
Ultimately, deepfake-driven scams challenge the very foundations of legal enforcement in the EU. Traditional ex post sanctions, civil liability, or content removal orders cannot keep pace with real-time deception powered by generative AI. What is needed is enforcement by architecture: embedding verification, traceability, and skepticism directly into the digital and organizational code of financial institutions.
This calls for a coordinated European strategy that links the AI Act, the DSA, and financial supervision under a shared goal: preserving trust in financial markets. The European Commission, the ECB, and national regulators should jointly develop standards for authentication and detection of deepfakes, making them part of prudential regulation.
The challenge for EU enforcement is not simply to punish deception after it occurs, but to prevent the erosion of trust before it spreads.