Wave of deepfake fraud: how technology is changing the threat landscape

Over the past year, Russia has experienced a deepfake epidemic, with the prevalence of such content surging by 160%. Alarmingly, nearly 90% of these fabricated media now target political and security issues. Yet research reveals that only half of Russians are even aware of deepfakes, while one in three citizens encounters scam attempts on a near-weekly basis. This GFCN investigation explores how synthetic media is transforming the information landscape and why global efforts are failing to curb this growing threat. Read the extended article from the «Kommersant» newspaper below.

What are deepfakes?

Deepfake technology utilizes machine learning and artificial intelligence to generate synthetic media, enabling the manipulation of faces and voices in video or audio recordings. 
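
For readers curious about the underlying mechanics, the short sketch below (written in Python with PyTorch; it is an illustrative assumption, not code from GFCN, «Kommersant», or any specific tool) shows the shared-encoder, two-decoder autoencoder scheme behind classic face-swap deepfakes: a single encoder learns pose and expression from face crops of two people, each person gets their own decoder, and a swap is produced by encoding one person's face and decoding it with the other person's decoder. The layer sizes and the random tensors standing in for face crops are placeholders.

```python
# Illustrative sketch of the classic face-swap deepfake architecture:
# one shared encoder, one decoder per identity. All sizes are placeholders.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # compact code capturing pose/expression
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32x32 -> 64x64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()                           # shared between both identities
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# Training step (sketch): each identity's faces are reconstructed through the
# shared encoder. Random tensors stand in for real aligned face crops.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # placeholder face crops of person B
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a) + \
       nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# The "swap": encode person A's face, decode with person B's decoder,
# yielding B's likeness with A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```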

According to research carried out by the Global Fact-Checking Network (GFCN), politically and militarily motivated disinformation aimed at influencing public opinion is most prevalent in the Russian-language segment of the Internet. GFCN analysts note that the creators of such fakes most often feature the heads of Russian regions in their videos.

Catastrophic growth in the number of digital fakes

According to the Global Fact-Checking Network (GFCN), deepfake proliferation has reached alarming levels, with synthetic media production growing at an unprecedented and accelerating rate.

In just the first three months of 2025, Russian authorities identified 61 unique deepfakes and 2,300 copies circulating throughout the country. That is 65% more than the total number of fakes detected in all of 2024, and 2.6 times the number detected in all of 2023.
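
As a quick back-of-the-envelope check of these growth claims, the snippet below derives what they imply about earlier years; only the Q1 2025 count and the two ratios come from the source, while the prior-year totals are inferred, not reported.

```python
# Back-of-the-envelope check of the growth figures above. Only the 61 unique
# deepfakes from Q1 2025, the 65% increase over 2024, and the 2.6x multiple of
# 2023 come from the source; the implied prior-year totals are derived here.
unique_q1_2025 = 61

implied_2024_total = unique_q1_2025 / 1.65  # ~37 unique deepfakes in all of 2024
implied_2023_total = unique_q1_2025 / 2.6   # ~23 unique deepfakes in all of 2023

print(f"Implied 2024 total: {implied_2024_total:.0f}")  # -> 37
print(f"Implied 2023 total: {implied_2023_total:.0f}")  # -> 23
```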

In 2025, political topics, security issues, and the activities of law enforcement agencies became the dominant themes of Russian-language deepfakes, accounting for 89% of all cases.

Deepfakes are frightening, but remain a mystery

Despite the widespread proliferation of deepfakes, only 50% of Russians are aware of this form of digital manipulation. The figure comes from a survey conducted by ANPO “Dialog Regions”* in February 2025, which polled 3,600 respondents using the River Sampling method, a research approach in which participants are recruited specifically for each study rather than drawn from pre-existing databases.

The survey also revealed a significant knowledge gap: even among respondents familiar with the term “deepfake,” only 73% could accurately define it. In other words, only about one in three Russians fully understands the phenomenon, although 53% of respondents regard the spread of deepfakes as a threat.

The primary concern is the potential misuse of the technology for fraud, other illegal activities, and the manipulation of public opinion.

Nearly 43% of participants believe deepfakes primarily endanger gullible individuals lacking the skills to critically evaluate information. In this context, 54% of respondents supported legal initiatives to regulate deepfake usage.

From Phishing to AI Fraud: How Russians Are Protecting Themselves From Digital Scams

Nearly 80% of respondents reported personal experience with fraudulent activities. Moreover, one in three admitted encountering such scams on a near-weekly basis. The most prevalent scams involve fraudulent phone calls, phishing attempts, illegal online job offers, loan requests from those posing as acquaintances, and rigged lotteries or contests.

A striking 90% of respondents reported receiving suspicious calls or messages, with 61% encountering them frequently — at least weekly. One in ten respondents had experienced a scam where artificial intelligence played a key role. The most common forms of such deception were fake calls from acquaintances supposedly asking for financial assistance, as well as calls imitating contact centers of banks or government agencies.

Analytical data indicate that falsified information remains a pressing societal challenge, even as it steadily loses credibility with the public.

Public support is growing for legislative initiatives to regulate deepfakes, label fabricated content, and combat disinformation — and the appropriate actions are already being taken. For instance, the Russian State Duma is currently drafting legislation that would classify the use of deepfakes as an aggravating factor in both fraud and defamation cases.

International Collapse of Trust: Why Deepfakes Threaten Global Security

The deepfake problem concerns not just Russia but the whole world. International fake content is multiplying dangerously, as evidenced by an audio recording that surfaced on social media on March 23, in which U.S. Vice President J.D. Vance purportedly criticizes businessman and presidential adviser Elon Musk. Following the recent U.S. election, researchers have also detected a sharp increase in fake news mentioning the country’s newly elected president, Donald Trump.

Notably, creators of such fabricated content employ diverse AI capabilities. The Zephyr monitoring system for audiovisual materials and deepfakes has identified multiple instances in which generative technologies were maliciously used to manipulate biometric data in video content. The analysis covered both partial modifications (lower-face alterations with lip synchronization) and complete face swaps using synthetic imagery. Some cases additionally showed signs of post-production editing of individual video fragments.

Today, deepfake technology has clearly evolved from an exotic novelty to a potent weapon — equally dangerous in information warfare and everyday fraudulent schemes. The prevalence of political deepfakes is especially concerning — these manipulated videos of officials and celebrities regularly go viral, accumulating hundreds of thousands of views while destabilizing public discourse.

Yet surveys show that global society is not well prepared for this new threat. While authorities are trying to respond through legislation, experts warn that without mass digital literacy education and international cooperation, the wave of deepfakes could erode the very concept of truth.

* The all-Russian online survey was conducted on February 25-27, 2025, using the River Sampling method. 3,600 people aged 18 and over were interviewed. The sample is representative of the Russian population.

(c) Article cover photo credit: Ministerie van Buitenlandse Zaken