How Veo 3 could become a weapon of disinformation — and what to do about it

In May 2025, the world learned about Veo 3, a neural network developed by Google DeepMind that can create high-resolution videos from a single text prompt. Veo 3 is the result of years of work by a research team led by Demis Hassabis, a co-founder of DeepMind, which Google acquired in 2014.

The clips can contain realistic character animation, precise movements, background sounds, and even dialogue. Simply put, Veo 3 does more than generate a picture: it “shoots” entire scenes in which fiction is almost impossible to distinguish from reality. Alongside the technology’s possibilities come risks, especially around the creation of disinformation and misleading content. That is the subject of today’s article, which analyzes how Veo 3 has been used in the month since its launch.

How does Veo 3 work and what does it have to do with fakes?

Imagine typing a simple prompt into a text box: “A scene of an astronaut rescuing a dog on the moon.” In just a few seconds, Veo 3 turns this query into a full-fledged video: not animated silhouettes, but realistic character movement, lighting that matches the lunar surface, and even a synchronized soundtrack, from footsteps on dusty soil to deafening echoes inside a sealed helmet. Such videos can be used for creative purposes, but also to produce footage that is almost indistinguishable from the real thing.

But it is this ability to generate such compelling content that makes Veo 3 both a powerful tool and a potential source of misinformation. The system is able to reproduce the physics of objects with high accuracy, control camera movement, and add or remove scene elements while maintaining visual consistency. This makes it possible to create complex video compositions without the involvement of professional directors or cameramen — just describe the scenario and the AI takes care of the rest. However, along with this, the likelihood that such videos will be used to manipulate public opinion is growing.
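For readers curious about the mechanics: text-to-video generation of this kind is exposed through Google’s Gemini API as a long-running job, in which you submit a prompt, poll until generation completes, and download the result. The sketch below is illustrative rather than definitive; it assumes the google-genai Python SDK, and the model identifier and config fields are assumptions to be checked against Google’s current documentation.

```python
import time

from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Submit a text-to-video job. The model name and config fields are
# assumptions based on the google-genai SDK's video-generation
# interface; consult Google's current docs for exact identifiers.
operation = client.models.generate_videos(
    model="veo-2.0-generate-001",
    prompt="A scene of an astronaut rescuing a dog on the moon",
    config=types.GenerateVideosConfig(
        number_of_videos=1,
        duration_seconds=8,   # single clips are capped at 8 seconds
        aspect_ratio="16:9",
    ),
)

# Generation is asynchronous: poll the operation until it finishes.
while not operation.done:
    time.sleep(20)
    operation = client.operations.get(operation)

# Download the finished clip to disk.
video = operation.response.generated_videos[0]
client.files.download(file=video.video)
video.video.save("astronaut_dog.mp4")
```

The prompt-in, video-out loop is what makes mass production of convincing clips so cheap: nothing in it requires a camera, actors, or editing skills.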

Experts at the Rybar analytical center assessed the destructive potential of the new tool:

“The main threat lies in persuasiveness. Footage featuring fake eyewitness accounts or entirely fabricated events can generate heightened anxiety and tension. Many people fail to notice even glaring AI artifacts — like six-fingered hands — in video sequences. Tools like Veo 3 surpass earlier technologies, seamlessly syncing convincing audio with synthetic visuals. Even if we eventually train people to fact-check everything they see, widespread adoption of such tools will spawn another crisis: eroded trust in authentic video evidence.”

Nevertheless, the technology still has limitations. For example, the maximum duration of a single clip is only 8 seconds. At first glance this seems a serious obstacle, but users are already finding ways around it: built-in editing functions make it possible to stitch several short clips into longer, more coherent compositions. Even the time limit, then, is no barrier to creating large-scale and convincing video content.
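Stitching clips together requires nothing exotic; any scripting-friendly video library can do it. Below is a minimal sketch using the moviepy package (version 1.x import path; the file names are hypothetical) that joins several 8-second fragments into one continuous video:

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

# Hypothetical file names: three 8-second clips generated separately.
paths = ["scene_01.mp4", "scene_02.mp4", "scene_03.mp4"]
clips = [VideoFileClip(p) for p in paths]

# method="compose" tolerates clips whose dimensions do not match exactly.
final = concatenate_videoclips(clips, method="compose")
final.write_videofile("combined_scene.mp4")

for clip in clips:
    clip.close()
```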

In addition, the system sometimes makes mistakes: in character anatomy (the wrong number of fingers, unnatural proportions), in movement (sharp, mechanical gestures), or in the rendering of lighting effects. But the developers emphasize that the model’s quality has improved significantly over previous versions, and the number of artifacts in generated content has dropped noticeably. The fewer the traces of artificial origin, the harder it is for an ordinary viewer to tell that what they are seeing is fake.

GFCN expert Lily Ong from Singapore assesses the potential threat of Veo 3 to her country:

“As Veo 3 involves advanced synthetic media generation and auto content generation, it produces and disseminates highly realistic videos rapidly. Being a country with high digital adoption levels, Singapore is especially susceptible to the rapid spread of disinformation. Nonetheless, despite its slew of regulatory frameworks, Singapore ranks the highest in the rise of deepfake use in Southeast Asia, with scams hitting a record loss of S$1.1 billion (in Singapore dollars) in 2024, and many suggesting that the laws have had more impact on clamping down voices of dissent than combating AI tools like Veo 3.”

Who has access to this technology?

Veo 3 is available through the Google AI Studio platform and is integrated into the new Google Flow service for scene and camera control when creating video content.

It can also be run through Gemini AI, one of Google’s most popular AI assistants. Access requires a subscription: the Pro plan costs $21.99 per month and Ultra $274.99. The service is available in over 70 countries, making it a genuinely global product. Despite the relatively high cost, Veo 3 is already in use in the creative sphere, especially among filmmakers and advertising studios actively incorporating the new tools into their work.

Fauzan Al-Rasyid, a GFCN expert from Indonesia, highlights the positive use cases of Veo 3, including in Indonesia:

“Veo 3 — if used responsibly — has the potential to transform Indonesia’s creative and educational sectors. The key is establishing clear guidelines and training programs. Indonesia’s existing digital literacy initiatives like the National Digital Literacy Movement (GNLD), which has reached nearly 6 million people, need to expand to include AI literacy specifically.”

However, mass access to Veo 3 also opens up the possibility of using the technology outside the creative environment: for political manipulation, the spread of disinformation, or fraudulent schemes. This is why the question of regulating and controlling the use of AI is becoming particularly acute.

Fakes created using Veo 3

Veo’s website states that it blocks “malicious queries and results.” The model’s documentation also notes that it underwent pre-release red-teaming, after which the developers introduced additional security measures and filters against provocative requests.

Nevertheless, examples of Veo 3 being used to create misleading and potentially destructive content have already appeared. For example, Time magazine published several videos generated with the help of this neural network.

  • One of them shows a crowd triumphantly waving Pakistani flags in front of a burning Hindu temple (video: “Pakistani Riot”).

  • Another, titled “USAID in Gaza,” shows Palestinians lining up to receive humanitarian aid under the supervision of the U.S. military while chanting “Thank you, USA!” (video: “USAID in Gaza”).

  • A third shows a U.S. election official removing ballots from a ballot box and destroying them in a paper shredder; Veo 3 itself titled the file “Election Fraud Video” (video: “Election Fraud”).

A portion of users perceived all of these videos as real events, especially in today’s tense political environment.

Another case drew wide attention after a tragic incident in Liverpool, when a car plowed into a crowd, injuring more than 70 people. Police promptly stated that the driver was white in order to preempt racist rumors. Time, however, was able to use Veo 3 to generate a video of a similar scene featuring a black driver. The editorial team contacted Google, and the company promised to strengthen the labeling of AI videos. Cases like this show how quickly fake content can influence public opinion, which is especially dangerous during crises, elections, and other significant events, when people are less critical of information.

GFCN expert Fauzan Al-Rasyid confirms this; his example from Indonesia could apply to many other countries:

“What makes Veo 3 particularly threatening for Indonesia is our religious and ethnic diversity. Our pluralistic society has already proven to be fertile ground for false information that exploits religious and ethnic tensions. Imagine hyper-realistic videos showing fabricated religious conflicts, fake ethnic violence, or manipulated footage of government officials making inflammatory statements. Given our history of communal tensions, such content could trigger real-world violence. For Indonesian officials already struggling to cope with existing disinformation, this is a whole new level of the problem. Indonesia’s regulatory system isn’t ready for that either. Yes, we have the Electronic Information and Transactions Law and several laws against false information, but these had been written for simpler kinds of disinformation. The law moves slowly, but AI changes happen quickly — a dangerous gap has opened.”

A separate category is neural-network-generated videos of animals. One clip created with Veo 3 became popular online: it shows a kangaroo standing at an airport, holding a ticket, while its owner argues with an airport employee. Notably, not all users recognized that the content was AI-generated, partly because so much genuine footage of Australia’s unique fauna already circulates that such a scene seems plausible.

Experts at the Rybar analytical center nonetheless see promising prospects in the new neural network:

“Let’s move beyond trivial uses of Veo 3 — like humorous hippo animations — and focus on its real value. This tool excels at creating multilingual informational content, transforming analytics into compelling visuals, and producing cinematic effects that make complex data not just digestible, but aesthetically engaging. We often overlook how high-quality visualizations can communicate ideas more effectively — and at lower cost — than traditional methods. Just as AI can spread disinformation, it can also amplify truth: imagine using Veo 3 to craft narratives that educate, inspire, or unite. The tool’s impact depends entirely on the hands that wield it.”

How to protect yourself from Veo 3 fakes?

To combat disinformation, the developers have built SynthID technology into Veo 3, which embeds a hidden digital watermark in every generated video. In addition, all content created through Gemini AI carries a visible watermark.

However, these measures are still imperfect: SynthID is still in the testing phase, and the visible watermark can easily be cropped out. Moreover, some videos created in Flow by Google AI Ultra subscribers carry no visible watermark at all. This sets the stage for misuse of the technology, especially by users who deliberately seek to hide the artificial origin of the material.
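Conceptually, provenance signals form layers of different strength: a visible watermark is the weakest (it can be cropped out), embedded metadata such as C2PA “content credentials” is stronger, and an invisible watermark like SynthID is the strongest, although Google has not released a public SynthID detector for video. The sketch below is therefore purely hypothetical pseudologic: detect_invisible_watermark and check_c2pa_manifest are placeholder names, not real APIs, and the point is only the ordering of the checks and the caveat at the end.

```python
from dataclasses import dataclass


def detect_invisible_watermark(path: str) -> bool:
    # Hypothetical stand-in for a SynthID-style detector. No public
    # detector API for video exists at the time of writing.
    return False


def check_c2pa_manifest(path: str) -> bool:
    # Hypothetical stand-in for reading C2PA provenance metadata
    # ("content credentials") from the file.
    return False


@dataclass
class ProvenanceVerdict:
    signal: str       # which check fired
    confidence: str   # "high", "medium", or "none"


def assess_video(path: str) -> ProvenanceVerdict:
    """Layered provenance check, strongest signal first."""
    if detect_invisible_watermark(path):
        return ProvenanceVerdict("invisible watermark", "high")
    if check_c2pa_manifest(path):
        return ProvenanceVerdict("provenance metadata", "medium")
    # No signal is NOT proof of authenticity: watermarks can be
    # stripped, and many generators never embed them at all.
    return ProvenanceVerdict("none", "none")
```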

Existing video-authentication tools, such as Intel’s FakeCatcher or Microsoft’s Video Authenticator, are no longer reliably effective against the latest generation of neural networks. And research from the University of Maryland has shown that even watermark-based protections can be removed or spoofed by a sufficiently resourceful attacker.
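Most detectors of this kind share the same skeleton: sample frames from the video, score each frame with a classifier, and aggregate the scores. The sketch below shows only that skeleton, under stated assumptions: OpenCV (cv2) does the frame extraction, while score_frame is a hypothetical placeholder for whatever trained deepfake classifier one actually deploys.

```python
import cv2  # pip install opencv-python


def score_frame(frame) -> float:
    """Hypothetical placeholder: a real system would run a trained
    deepfake classifier here and return P(synthetic) for the frame."""
    return 0.0


def score_video(path: str, num_samples: int = 16) -> float:
    """Sample frames evenly across the video and average the
    per-frame synthetic-content scores."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    scores = []
    for i in range(num_samples):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // num_samples)
        ok, frame = cap.read()
        if ok:
            scores.append(score_frame(frame))
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0
```

Plain averaging is the crudest possible aggregation; production systems also examine temporal consistency across frames, which is precisely where single-frame classifiers tend to fail against the newest generators.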

So while developers are making attempts to ensure security, they recognize that it is still impossible to completely rule out fake content.

GFCN expert Fauzan Al-Rasyid emphasizes:

“Since labeling can be easily bypassed, the most sustainable solution will come from creating savvier consumers of digital content among Indonesians. This means expanding digital literacy programs to specifically address AI-generated content, training journalists and fact-checkers on new detection methods, including media literacy in our national curriculum from primary school level. The narrow window of opportunity is closing, and my level of optimism is low. Products like Veo 3 are already being used to create convincing fake content.”

What if you can’t distinguish truth from fiction?

Today, trust in video content is becoming increasingly relative: users have begun to doubt even genuine videos, mistaking them for deepfakes or AI generation. For example, a Daily Wire journalist was accused of distributing an allegedly AI-generated video of humanitarian aid in the Gaza Strip. A BBC journalist, however, confirmed the authenticity of the material, noting that it was neither fake nor AI-generated.

A similar situation arose around Donald Trump’s trip to Saudi Arabia. In May 2025, when Trump visited the country, allegations spread on social networks that the Al Arabiya Farsi TV channel had used deepfake technology to create a video of the US president drinking traditional coffee, supposedly concealing his actual refusal of the drink. However, a GFCN investigation found the deepfake rumors to be unfounded: the scenes in which Trump appears to decline the coffee were simply filmed at a different location.

Episodes like this demonstrate that with the advent of increasingly sophisticated neural networks capable of generating realistic videos, users are becoming more vulnerable to manipulation.

All of these scenarios are possible because Veo 3 and similar systems can create videos that look almost real.

But some countries are already trying to regulate the use of AI. The European Union has passed the Artificial Intelligence Act, requiring AI content to be labeled and systems to be classified by risk level. China has strict rules for model registration and control. In the US there is no single law yet, but guidelines such as the AI Bill of Rights are in place. Even so, users and attackers still find ways to circumvent these rules.

However, most states are not yet ready to regulate this sphere. Meanwhile, it is clear that without international cooperation and transparency, the safe use of AI will be impossible.

GFCN expert Lily Ong also points to the importance of steering the use of Veo 3 in a productive direction:

“Veo 3 can significantly enhance the quality, accuracy, and accessibility of information by processing large volumes of visual or textual data quickly. When equipped with deepfake detection or content verification capabilities, it can help to uphold information integrity. In crisis communications, Veo 3 can be used to disseminate critical information to the masses rapidly.”

Veo 3 is not just another step in the development of artificial intelligence. It is a technology that is changing the way we perceive reality. It opens new horizons for movies, education, science and media. But at the same time, it presents society with serious challenges and questions: how to distinguish truth from fiction, how to protect ourselves from manipulation, and how to maintain trust in information.

GFCN will monitor the situation and provide expertise on the safety and ethics of such technologies. After all, in a world where anyone can create a video capable of influencing millions of people, it is important to understand who controls the information: humans or machines.