Sponsored Content

The $25 Million Video Call: Deepfakes Are Now a KYC Crisis

There is a troubling paradox at the center of this threat. The moment of highest institutional trust in a digital financial transaction is now also the moment of highest risk.

By Kunal Devrasen | Apr 02, 2026
Kedar Kulkarni, CEO of HyperVerge.

Earlier this year, a video began circulating online featuring Sundararaman Ramamurthy, the CEO of the Bombay Stock Exchange. In the clip, he appeared to be offering stock tips. It looked real enough to convince many who saw it. Unfortunately, it was a deepfake, and the extent of damage caused remains unknown.

In 2024, an employee at Arup in Hong Kong joined what seemed like a routine video call. Colleagues were present. Faces were familiar. Instructions were clear. By the end of that call, $25 million had been transferred. Every person on that call was generated by AI.

These were not isolated incidents cooked up by sophisticated nation-state actors. They were not edge cases from a distant threat landscape. They were a preview. Deepfakes have crossed a threshold that the financial sector has not yet reckoned with: they are now good enough, cheap enough, and accessible enough to defeat the very systems we built to stop fraud. After infiltrating e-commerce, insurance, and bank account phishing, the next target is KYC.

KYC was built on visual certainty. The check was based on a live face or a specific interaction, such as making a certain gesture.

That assumption is now outdated. Modern deepfakes do not try to bypass verification; they pass it.

Controlled studies have shown that viewers misjudge a high-quality deepfake roughly three times out of four.

This flips the risk model.

The most dangerous moment in a fraud attempt is no longer when a system is uncertain. It is when it is fully convinced.

Liveness checks, standard video KYC sessions, and even calls with senior executives are no longer reliable safeguards. They are targets.

India is more exposed than it knows

India is the world’s largest experiment in remote-first financial inclusion. In April 2025, cumulative e-KYC transactions in India crossed 2,393 crore (roughly 24 billion). Hundreds of millions of people were onboarded to banking, lending, and insurance entirely through digital KYC, at a scale no other country has come close to replicating. That ambition created something remarkable: a financially included population connected to formal institutions at historic speed.

It also created an attack surface of historic scale.

Globally, 53% of financial professionals report having already encountered deepfake scam attempts. In parallel, deepfake incidents are growing at triple-digit rates year over year.

Now consider that scale applied to India’s onboarding infrastructure.

The scale of the threat is unprecedented. And the impact is not just limited to misinformation. When the sophistication of deepfakes is aimed at KYC instead of social media, the impact shifts from perception to direct financial loss.

Fraud has been democratised

Compounding the problem, deepfake fraud is no longer expensive or specialised.

Between 2023 and 2025, the number of deepfake files exploded from 500,000 to over 8 million. Fraud attempts involving deepfakes surged by 3,000% in a single year.

The barrier to entry has collapsed. Tools that can clone faces, voices, and expressions are widely accessible. Even industry experts acknowledge that replicating a voice or video now requires very little technical skill.

We have seen this pattern before.

Phishing followed the same trajectory. It started as a niche tactic. It became industrial once tooling improved. Then it became ubiquitous.

Deepfakes are entering that phase now.

Regulation is built on an outdated assumption

We know the threat is growing, but regulation is not keeping pace.

The Reserve Bank of India’s V-CIP framework assumes that video equals presence. Recent attacks break that assumption completely: in the Arup case, not just one identity but an entire meeting was fabricated in real time.

If a multi-person executive call can be simulated, a one-on-one KYC session is no longer a meaningful proof of presence.

This creates a structural lag between regulation and reality.

Detection today has to move below the surface: not what is seen, but how it was generated. Systems must be able to identify pixel-level inconsistencies, temporal mismatches, and injection patterns.

Without that shift, compliance can be met while fraud still succeeds.
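To make the idea of a temporal-mismatch check concrete, here is a toy sketch: it flags abrupt frame-to-frame changes in a synthetic pixel array, a crude stand-in for the continuity checks that real injection detectors perform. The `flag_splices` function and its 5x-median threshold are illustrative assumptions, not a production method.

```python
import numpy as np

def temporal_anomaly_scores(frames: np.ndarray) -> np.ndarray:
    """Mean absolute pixel change between consecutive frames."""
    diffs = np.abs(frames[1:].astype(float) - frames[:-1].astype(float))
    return diffs.mean(axis=(1, 2))

def flag_splices(frames: np.ndarray, k: float = 5.0) -> list[int]:
    """Flag frames whose transition score is k times the median score."""
    scores = temporal_anomaly_scores(frames)
    baseline = np.median(scores) + 1e-9
    return [i + 1 for i, s in enumerate(scores) if s > k * baseline]

# Synthetic demo: a smoothly drifting 8x8 "video" with one spliced frame.
rng = np.random.default_rng(0)
video = np.cumsum(rng.normal(0, 0.5, size=(30, 8, 8)), axis=0)
video[20] += 50.0  # simulate an injected frame that breaks continuity
print(flag_splices(video))  # the jumps into and out of frame 20 stand out
```

A real detector would work on decoded camera frames and combine many such signals; the point here is only that a spliced or injected frame leaves a statistical discontinuity even when each individual frame looks plausible.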

The liability question is unresolved

When deepfake fraud intersects with regulated KYC, accountability becomes unclear. If a fraudulent user passes a compliant onboarding process, who is responsible?

The bank that approved the account, the vendor that supplied the verification system, or the framework that defined the process as sufficient?

There is no established precedent.

This matters because financial losses from deepfake fraud are already accelerating. In the first half of 2025 alone, reported losses reached over a billion US dollars.

The first major legal dispute in India over deepfake-resistant onboarding will not just assign blame. It will redefine how liability is distributed across the ecosystem.

Right now, the industry is unprepared for that moment. Ideally, every player would bridge the gap by delivering the highest standards and accepting accountability. In practice, that may be infeasible.

In this case, regulation alone can pave the way with guidelines and clear responsibilities to safeguard individual as well as enterprise financial interests.

You cannot train your way out of this

The Arup employee was not untrained. The environment was not obviously suspicious. The interaction followed expected patterns. And yet, $25 million was transferred.

This is the core limitation of human-centric or human-in-the-loop defence.

Even experts struggle. Some studies show that detection systems themselves lose significant accuracy when exposed to real-world deepfakes, and human observers often perform worse.

Training improves awareness. It does not solve perception limits. Defence has to be embedded into the system itself.

A resilient KYC architecture does three things differently:

  • It uses passive liveness detection that does not rely on user prompts
  • It detects injection attacks where synthetic inputs are fed directly into the pipeline
  • It layers behavioural and device-level signals that are difficult to replicate consistently

These controls operate silently. The focus is to ensure that legitimate users do not notice them, and attackers cannot bypass them. That asymmetry is what matters.
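As a rough illustration of how such layered controls might combine, the sketch below folds three signal scores into a single decision. The signal names (`passive_liveness`, `injection_check`, `device_integrity`) and the thresholds are hypothetical, chosen for illustration; they do not represent any vendor's actual API or tuning.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Each score is in [0, 1]; higher means more likely genuine.
    # All field names are hypothetical, for illustration only.
    passive_liveness: float   # texture/depth cues, no user prompt required
    injection_check: float    # low if the capture path shows virtual-camera or injection markers
    device_integrity: float   # device and behavioural consistency signals

def kyc_decision(s: SessionSignals, floor: float = 0.5, approve_at: float = 0.8) -> str:
    """Layered decision: one weak signal is enough to stop the session."""
    weakest = min(s.passive_liveness, s.injection_check, s.device_integrity)
    if weakest < floor:
        return "reject"
    if weakest >= approve_at:
        return "approve"
    return "manual_review"

print(kyc_decision(SessionSignals(0.95, 0.90, 0.85)))  # approve
print(kyc_decision(SessionSignals(0.95, 0.30, 0.90)))  # reject: injection signal fails
```

The design choice worth noting is that the layers are conjunctive: a convincing face cannot compensate for a failed injection check, which is exactly the asymmetry the paragraph above describes.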

The decision point is here

The incident involving the Bombay Stock Exchange should have been a warning. The frequency of such incidents clarifies the stakes.

The scale of the problem is no longer hypothetical. Deepfake incidents are rising at triple-digit rates. Financial losses are compounding. Detection, both human and automated, is struggling to keep up.

The question is no longer whether deepfakes can defeat KYC. They already can.

The question is whether the financial sector and its regulators will adapt before losses, legal exposure, and forced intervention leave them no choice.

KYC was designed to create trust at scale.

Deepfakes are now testing whether that trust was ever as strong as it seemed.