Your face is not yours: how your digital self can be stolen and how to resist it

Until recently, deepfakes and voice synthesis seemed like plots from science fiction. Today they are a reality accessible to everyone. A sudden phone call from a «distressed relative», incriminating photos from a party you’ve never been to, «evidence» in court fabricated in a couple of clicks — all of these are no longer hypotheticals but documented cases from around the world. Read about the threats of «image theft» and how to protect yourself from it in our article dedicated to International Data Privacy Day.

Artificial intelligence is no longer just a tool for creativity or entertainment. Today it has become a weapon for stealing a person’s digital identity, allowing attackers to steal and forge the most valuable things a person has in the digital age — their voice, face, and reputation. Scammers use AI to extort money, and in personal conflicts people create fake evidence to settle scores or pressure partners.

Cases of AI being used to fake audio, video, or correspondence for fraud or revenge are becoming more common. While legislation around the world struggles to catch up with the rapid development of the technology, attackers exploit the legal vacuum. The following stories clearly illustrate how AI-assisted theft of a digital identity is already ruining lives while remaining in a legal «gray zone».

Call fraud

In March 2019, fraudsters used AI to generate the voice of the director of a large German energy concern. They called the manager of a British subsidiary, ordering him to immediately transfer €220,000 to the account of a Hungarian supplier. The voice was so convincing that the manager followed the instructions. The money was then transferred to other countries and partially cashed in Mexico.

Experts emphasize that law enforcement agencies and legislators cannot keep pace with this type of threat. Tracing calls and transactions initiated by criminals from other countries is extremely difficult, and victims are often left with no leads to investigate. Meanwhile, the technology companies that build voice synthesis tools still bear practically no responsibility for their misuse.

Scammers increasingly use artificial intelligence to create realistic voice clones and extort money by posing as relatives in distress. Today, just a few seconds of audio, easily found on social networks, are enough to create a convincing copy. According to the US Federal Trade Commission, such scams are becoming among the most common, netting criminals millions of dollars.

In 2023, an elderly couple from Canada almost fell victim to fraudsters using voice cloning. They received a call from a man whose voice was almost indistinguishable from that of their grandson Brandon. The «grandson» said he was in jail and urgently needed bail money. Panic-stricken, the couple withdrew the maximum amount the bank would allow and were on their way to another branch when a bank manager stopped them, warning that another customer had received a similar call with a faked voice. This allowed the couple to avoid any financial loss.

The best defense against such calls is to end the conversation immediately and call the relative back yourself at a number you know, to stay calm, and never to transfer money via cryptocurrency or gift cards. The core problem remains, however: the technology is evolving faster than the safeguards against it, leaving users, especially the elderly, vulnerable to ever more sophisticated deception.

Voice and email forgery for false accusations

In 2024, Dajon Darien, an employee of a school in Maryland (USA), used AI to slander his boss in retaliation for the principal’s decision not to renew his contract. Darien created a fake audio recording in which the principal’s voice was generated by artificial intelligence. On the recording, the «principal» allegedly made racist and antisemitic comments. The fake spread across social networks and caused serious consequences: unrest at the school, threats against the real principal, and the need for an increased police presence.

The investigation found that Darien had deliberately searched the Internet for voice cloning services, bought a subscription to one of them, and uploaded an audio file of a real conversation with the principal to create a convincing fake. The court found Darien guilty of disrupting school operations, and he was sentenced to four months in prison.

Judges and lawyers are noting the increasing use of audio and video deepfakes to compromise the opposing party in court, especially when it comes to personal and family relationships.

Melissa Sims from Pennsylvania (USA) spent two days in custody after fabricated evidence was presented to the court against her. The reason for the arrest was fake correspondence created with the help of artificial intelligence in 2024. The incident began when Melissa called the police to her home because of her boyfriend’s aggressive behavior. Before officers arrived, however, the man injured himself and told them that Melissa had done it. As a result, she was arrested on assault charges, and a protective order was issued against her. The man later created fake screenshots of text conversations in which «Melissa» insulted him. On the basis of this «evidence», the court again took her into custody for allegedly violating the order and sending insulting messages to her ex. Only two days later was Melissa released on bail, and she then spent another eight months in legal proceedings to prove her innocence.

«The court did not think to verify the authenticity of the messages and the sender’s account»

Fabricated evidence in court

During a hearing at the New York Court of Appeals in the spring of 2025, the judges discovered that the person presenting the plaintiff’s arguments in a video was an artificial avatar generated by a neural network. The judge stopped the playback, expressing displeasure that the court had not been warned in advance. The plaintiff apologized, explaining that he had not intended to mislead the court but had only sought to «state his position more clearly».

The incident became another example of the problems that come with introducing artificial intelligence into legal practice. Lawyers have previously faced sanctions for using AI tools that «made up» non-existent court precedents.

Screenshot of a live broadcast from the courtroom, where an artificial intelligence avatar (bottom right) addresses the judges on behalf of the plaintiff.

Fake porn with a face swap

This is one of the most widespread and malicious ways to steal and abuse someone else’s audiovisual likeness. Scammers or detractors use deepfake technology to «transfer» the victim’s face (most often a woman’s) onto the body of an actor in pornographic videos. The goals are blackmail, extortion, humiliation, and revenge.

Helen Mort, a teacher from Sheffield (UK), accidentally discovered that deepfakes of her face had been used to create explicit and often violent sexual imagery. By her account, the discovery caused extreme stress, panic attacks, and a sense of deep shame. The police, however, could not help: at the time, creating deepfake pornography was not a crime in England and Wales. Although the fake images were eventually removed from the site, their author remained anonymous and unpunished.

This case is part of a series of incidents around the world.

Similarly, in early 2024 at a high school in Beverly Hills (USA), students used artificial intelligence to create fake nude images of their classmates, superimposing their faces onto other people’s bodies.

Although laws in the United States allow victims of non-consensual deepfakes to sue the creators of such material, applying them in practice is difficult. According to human rights advocates, victims struggle to identify the perpetrators and obtain justice, while the images themselves can circulate on the Web indefinitely. Moreover, the legal status of such material often remains ambiguous, since not every nude imitation qualifies as pornographic in the legal sense.

In this context, the case of the Grok neural network from X is telling: restrictions on generating «undressed» images, including of children, were introduced not as an initial ethical safeguard of the platform but as a direct reaction to regulators opening investigations. The Grok situation vividly illustrates how the law lags behind AI: protecting users becomes a byproduct of commercial risk rather than a core priority, while the main responsibility for protecting one’s digital image still falls on the user.

Using well-known individuals for disinformation

Although the goal here is usually not personal revenge but creating a convincing image to sway public opinion, the essence, the theft and distortion of an audiovisual likeness, is the same. Fake videos show public figures saying or doing things that never happened. At the epicenter of one such scam was Elon Musk, owner of the scandal-ridden AI assistant mentioned above.

In the spring of 2025, scammers created a fake website to steal cryptocurrency. To convince their victims, they used Elon Musk’s deepfake video, where he allegedly distributes prizes on behalf of Tesla.

The phishing site used the billionaire’s face to attract new victims.

The examples above clearly show that the threat has ceased to be hypothetical and has become practical. Content generation technologies are developing exponentially, and the legal system and law enforcement practice are not keeping up with them. While regulators argue over classification and courts sort out individual cases, fraudsters and bad actors are already using the available tools against specific people.

However, you should not panic. There are specific and effective digital hygiene measures and preventive steps that can significantly reduce your risks. You can’t control the development of technology, but you can control your digital footprint and how you respond to potential threats.

Here’s a step-by-step guide on how to protect your voice, face, and reputation from theft and forgery using AI:

Mindful placement of data online

Voice protection:

  • Do not record personal passwords, PIN codes, or addresses in voice messages (WhatsApp, Telegram, or voice memos). Such recordings can be intercepted through app vulnerabilities, account hijacking, or malware on the device.
  • Limit public audio and video appearances. If you are not a public person, avoid posting long samples of your speech.
  • Avoid communicating in audio format where this is not necessary. Scammers can call you under false pretenses to make a voice recording and then use it.
  • Set up your social media privacy:

— check which apps have access to your microphone,

— use voice services and voice commands with caution,

— make sure your videos and live streams are visible only to your friends, not to all of your subscribers.

Face and image protection:

  • Minimize the number of high-quality photos in the public domain. Especially valuable for creating deepfakes are high-resolution photos with different angles and emotions.
  • Use privacy settings for avatars and for albums of personal and family photos on social networks.
  • Be careful with biometric authentication. Do not take dubious quizzes on social networks that upload your photo for «aging», «genetic analysis», and similar entertainment. Remember: what gets onto the Internet stays there forever.
  • Be careful about which apps you grant access to your phone’s photo gallery, and audit those permissions regularly.

Technical protection and device settings

  • Install and use antivirus software on all devices.
  • Regularly update the operating system and applications on your smartphone and computer. Updates often fix vulnerabilities.
  • Use two-factor authentication (2FA) everywhere, especially for cloud storage (Google Photos, iCloud, Yandex.Disk) where your media files are kept.
  • Encrypt important local photo and video archives using specialized programs or built-in OS tools (BitLocker on Windows, FileVault on macOS).
  • Strip metadata (EXIF, geolocation) from photos and videos before publishing them online, or disable its recording in your camera settings; special applications can help with this.
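
To see why metadata stripping matters, it helps to know that the operation itself is simple: JPEG files keep EXIF data (camera model, timestamps, GPS coordinates) in so-called APP1 segments, which can be dropped without touching the image. Below is a minimal, illustrative sketch in pure Python; the function name `strip_exif` and the byte-level approach are our own assumptions, not a reference to any particular tool, and a real application should handle more edge cases:

```python
def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Remove APP1 (EXIF/XMP) segments from a JPEG byte stream.

    Illustrative sketch only: assumes a well-formed JPEG and does not
    cover every edge case a production metadata remover would.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":      # SOI marker missing: not a JPEG
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg_bytes) - 1:
        if jpeg_bytes[i] != 0xFF:          # unexpected byte: copy the rest verbatim
            out += jpeg_bytes[i:]
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:                 # SOS: compressed image data follows
            out += jpeg_bytes[i:]
            break
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker != 0xE1:                 # keep everything except APP1 (EXIF/XMP)
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

The image pixels pass through untouched; only the metadata segments, including any embedded GPS coordinates, disappear.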

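The 2FA advice above is worth understanding, not just following. A one-time code from an authenticator app is not magic: it is an HMAC of the current time, computed by the TOTP algorithm (RFC 6238), which is why an intercepted code becomes useless after about 30 seconds. The sketch below is a standard-library-only illustration; the function name `totp` and its parameters are our own choices, and real logins should of course use an established authenticator app rather than hand-rolled code:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

For the reference key from RFC 6238 (the ASCII string `12345678901234567890`) at Unix time 59, this reproduces the documented eight-digit value 94287082.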
How to detect an AI attack and what to do

Signs of a deepfake:

— Unnatural lip movements, blinking or facial expressions.

— Artifacts around hair, ears, glasses, or teeth.

— A «blurred» look to the video and unnaturally smooth skin.

— Unnatural shadows and lighting on the face.

— Extraneous or inconsistent details in the frame.

— Audio that is slightly out of sync with the video.

— Too «perfect» or monotonous voice in an audio message asking for money.

— Speech patterns uncharacteristic of the person.

Algorithm of actions in case of suspected fraud:

1. DON’T PANIC.

2. Contact the person or organization you are being called on behalf of directly at the number you know or through a face-to-face meeting to confirm the information. Don’t use contacts from a suspicious message.

3. Ask a «secret question» that only you and the real person can answer (for example, about a personal event that cannot be found on social networks).

4. Do not transfer money or provide personal information under pressure («urgent!», «secret!», «otherwise it will be bad»).

Proactive measures and digital hygiene

  • Do a «digital cleanup»: search for your own first and last name on Google or Yandex, then delete or hide unnecessary photos and videos of yourself.
  • Agree on a «verbal watermark» with those you trust: a code word or phrase that your family or close circle will use in an emergency when calling for help. A fraudster generating a cloned voice will not know it.
  • Protect your photos with special tools. Services such as Fawkes or PhotoGuard make pixel-level changes that are imperceptible to the eye but prevent AI algorithms from correctly «reading» and copying your face.
  • Verify suspicious content. Use online deepfake detectors (for example, Deepware Scanner), but remember that their accuracy is not absolute.

Legal training

Of course, every country has its own legislative particularities, and it is important to know the laws of the country where you live. Below are several universal rules that will help you protect your likeness.

  • Keep the originals. Store your original photos and videos in high quality on secure media. In the event of a dispute, they can serve as reference material for forensic examination.
  • Know your rights. In many countries, the distribution of fakes that discredit honor and dignity, or their use for fraud, falls under the articles of criminal law (libel, fraud, unauthorized access to computer information).
  • Document everything. If you become a victim of attackers, take screenshots, record phone numbers, and save files. Then file a report with law enforcement.
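
The «keep the originals» and «document everything» advice can be reinforced with one simple habit: recording cryptographic fingerprints of your original files. A SHA-256 hash recorded today lets an expert later confirm that a file has not been altered since. The sketch below uses only the Python standard library; the function names and the record format are our own illustrative choices, not an established evidence-handling standard:

```python
import hashlib
import json
import pathlib
from datetime import datetime, timezone


def fingerprint(path):
    """Return an integrity record (SHA-256, size, timestamp) for one file."""
    p = pathlib.Path(path)
    h = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            h.update(chunk)
    return {
        "file": p.name,
        "sha256": h.hexdigest(),
        "size_bytes": p.stat().st_size,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }


def fingerprint_folder(folder, out_file="fingerprints.json"):
    """Fingerprint every file in a folder and save the records as JSON."""
    records = [fingerprint(p) for p in sorted(pathlib.Path(folder).iterdir())
               if p.is_file()]
    pathlib.Path(out_file).write_text(json.dumps(records, indent=2))
    return records
```

If a disputed copy of a photo later surfaces, recomputing its hash and comparing it with the stored record immediately shows whether even a single byte has changed.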

A daily checklist:

  • I think before posting a photo or video.
  • I’ve checked social media privacy settings.
  • I’ve enabled two-factor authentication.
  • I’ve agreed on a code word for emergencies with loved ones.
  • I am skeptical of unexpected requests for money over the phone, even from a «familiar» voice.
  • I update the software on my devices.
  • I’m improving my fact-checking skills.

Remember: it is impossible to completely erase your digital footprint from the Internet, but it is possible to significantly reduce the risks. Your main weapons against scammers are awareness and critical thinking.