The Influence of Bots ("Electronic Flies") on Shaping User Opinions

Social media platforms are invisible battlegrounds for bot accounts (known in Arabic as “dhoubab elektroni”, or “electronic flies” in English) that launch online campaigns to sow discord, create confusion, and advance their own vested interests. They spread extremist ideologies, malicious rumors, and fake news; they engage in aggression, incitement, and distortion; and they attempt to destabilize the security and stability of countries and societies by fostering social divisions and stoking conflicts between parties, all for the benefit of the party or country behind the “flies”.

How Do Bot Accounts Operate?

Bot accounts are fake social media profiles that operate according to a methodical playbook. They are run by fake users to influence and mislead public opinion on a particular region or topic with fabricated news. They deploy slogans and catchphrases, create hashtags, and repost them across millions of pages.

Amid the flood of contradictory information, the recipient becomes prey to deliberate misinformation or a victim of clever, systematic polarization. The fabricated posts and images sent by the “flies”, who are skilled at producing entertaining or serious videos, are contradictory by design and are used to manipulate recipients’ minds, shaping their awareness and directing their view of events. They are presented to a tired, emotional, and charged audience that believes them without knowing the source. That said, bots can also distribute entertainment content, for example by promoting an author’s channel or page.

The content streaming onto our phones is becoming an increasing threat. It is a strategic ideological weapon more dangerous than a military one: carefully studied and precisely targeted to mobilize the target group psychologically and intellectually, bypassing reason and appealing to emotion. It is a psychological war waged by conflicting parties, who use bot accounts to spread propaganda, misleading information, and “exclusive” leaks that favor one side and paint a dire picture of the other. In a matter of seconds, such material can become a trending topic on social media.

Hostile countries use AI-driven social bots to generate written and visual content as an effective means of spreading fear, exaggerating events, falsifying audio, and fabricating images and videos.

They distribute this content through suspicious news websites, leaving comments under articles that insult, for example, Russia, cast doubt on its government decisions and social security, and distort the facts by condemning its defense of its territory. Bot accounts consume and republish this electronic media, mobilizing and misleading global public opinion and turning the media into a “soft” tool for building an internal opposition whose goals serve the agendas of external parties.

The media war is not merely news and information that can be ignored; it is the social engineering of consciousness and the booby-trapping of minds.

The objectives behind these actions can vary. Among them — undermining trust and destabilizing the image of Russia or any other country in the eyes of its people and the international community. However, a commercial motive should not be ruled out either. Such activities could be used in attempts to harm business competitors, among other purposes.

The mechanisms of influence vary by digital battleground and its users. The X platform is used by opinion leaders and elites to express their thoughts and shape public opinion, while TikTok and Instagram are used by young people from all segments of society.

In my opinion, the latter are the most popular and the most dangerous, given their audiences’ lack of qualitative awareness and sound analysis. They influence young recipients directly within seconds, making them the preferred target for bot accounts that aim to discredit individuals or mislead public opinion by circulating fake news until it hardens into “facts” that people discuss, thereby shaping public opinion.

How to Identify a Bot?

Distinguishing a bot from a real internet user requires careful analysis based on several indicators. Below are key criteria to evaluate suspicious online profiles:

1. Behavioral Signs

  • High activity. Bots often post dozens of messages per hour since they don’t need breaks for sleep or rest. Real users typically don’t behave this way, as they have jobs, personal lives, and other commitments.
  • Repetitive actions & templated content. If an account repeatedly posts the same phrases (e.g., “The government is evil!”) or slightly reworded versions of the same message, this is suspicious. Genuine users express themselves more diversely. Also, be wary of accounts that overuse specific keywords or hashtags.
  • No dialogue, only monologue. Bots rarely engage in meaningful discussions. They either ignore follow-up questions, respond incoherently, or avoid private messages altogether.
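The first two behavioral signs above lend themselves to simple automation. Below is a minimal sketch of how they could be screened for; the `Post` structure, the thresholds, and the flag names are all illustrative assumptions, not any platform's real API.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    hour: int   # hours since the first observed post (illustrative timeline)
    text: str

def behavioral_flags(posts, max_per_hour=20, max_repeat_ratio=0.5):
    """Flag the behavioral red flags described above (thresholds are guesses)."""
    flags = []
    # High activity: dozens of messages per hour, no breaks for sleep or rest.
    per_hour = Counter(p.hour for p in posts)
    if per_hour and max(per_hour.values()) > max_per_hour:
        flags.append("high_activity")
    # Repetitive, templated content: the same phrase posted over and over.
    texts = Counter(p.text.strip().lower() for p in posts)
    if posts and texts.most_common(1)[0][1] / len(posts) > max_repeat_ratio:
        flags.append("templated_content")
    return flags
```

A real system would also compare near-duplicates ("slightly reworded versions of the same message"), e.g. with shingling or edit distance, rather than exact string matches.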

2. Technical Indicators

  • Empty or “dead” profile. Lack of a profile photo, generic/default avatars, randomly generated usernames, and few (if any) mutual friends despite a high follower count suggest the account was created for ulterior motives.
  • Account creation date. If a recently registered account is already aggressively discussing politics or posting scandalous content, this is a red flag.
  • Geolocation & language. Bots may reveal themselves through unnatural phrasing, odd sentence structures, or errors suggesting machine translation — even if the account claims to belong to a native speaker.
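Two of the technical indicators above, the "dead" profile and the suspiciously young account, can likewise be expressed as checks. This is a sketch under stated assumptions: the profile fields, the 30-day cutoff, and the username pattern are hypothetical, not taken from any platform's data model.

```python
import re
from datetime import date

def technical_flags(profile, today=date(2025, 1, 1)):
    """Flag the technical indicators above; field names are illustrative."""
    flags = []
    # Empty or "dead" profile: no photo plus a randomly generated
    # username like "user84321987" (letters followed by a long digit run).
    if not profile.get("has_photo") and re.fullmatch(
            r"[a-z]+\d{4,}", profile.get("username", "")):
        flags.append("generic_profile")
    # Recently registered account already aggressively pushing politics.
    age_days = (today - profile["created"]).days
    if age_days < 30 and profile.get("political_posts", 0) > 10:
        flags.append("new_and_aggressive")
    return flags
```

The language indicator (machine-translation artifacts, unnatural phrasing) is much harder to automate and is usually left to human review or a trained classifier.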

3. Content Analysis

  • Post history. Real users typically discuss various topics. If an account posts only about politics, scandals, or ads, or consists solely of rapid reposts, it’s likely inauthentic.
  • Lack of nuance & stylistic quirks. Bots often rely on AI-generated templates, making their content feel rigid and repetitive. Humans, by contrast, express thoughts more dynamically, with individual writing styles (e.g., unique emoji use, punctuation habits, slang, or even typos).
  • Emotional manipulation. Since bots aim to provoke reactions, they frequently use extreme, aggressive, or overly euphoric language without nuance. Avoid impulsive engagement—this may be precisely their goal.
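The content-analysis criteria above (single-topic feeds and emotionally charged language) can be approximated like this. The topic classifier is deliberately passed in as a parameter because it is assumed, not implemented here, and the word list is a tiny illustrative stand-in for a real lexicon.

```python
def content_flags(posts, topic_of):
    """Flag the content-analysis heuristics above; topic_of is an assumed classifier."""
    flags = []
    # Real users discuss various topics; a single-topic feed is suspicious.
    topics = {topic_of(p) for p in posts}
    if posts and len(topics) == 1:
        flags.append("single_topic")
    # Emotional manipulation: extreme language meant to provoke a reaction.
    charged = ("outrage", "traitor", "catastrophe", "evil")  # illustrative list
    hits = sum(any(w in p.lower() for w in charged) for p in posts)
    if posts and hits / len(posts) > 0.6:
        flags.append("emotional_manipulation")
    return flags
```

No single heuristic is conclusive; in practice these scores would be combined with the behavioral and technical checks before a human makes the final call.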

How to Counter Bots

Countering bot accounts requires relying on reliable sources, using logical and cognitive analysis, avoiding interaction with fake accounts, and blocking them. It also requires educating the youth and training young specialists to combat digital disinformation, as war today is not only fought with weapons, but also with words and images.

Bots are getting smarter, but they can still be detected by their unnatural behavior, repetitive patterns, and technical anomalies. The best approach is to combine human intuition, critical thinking, and technical tools.

  • Fact-checking messages

Verify sources and cross-check facts. Analyze suspicious content using specialized resources and technical tools (TinEye, Who.is, Botometer, etc.). Always trace a news story back to its original source—if none exists, that’s a major red flag. 

  • Run a “humanity test”

Ask the suspicious account an unexpected question or request a detailed discussion — bots often falter or provide generic responses since they lack nuanced knowledge or genuine “opinions” on complex topics.

  • Assess emotional manipulation

Are the messages designed to scare or anger you? Consider who might benefit from such reactions and how.

  • Report spam activity

Use built-in reporting features on social media and news sites — flag suspicious accounts to platform moderators.

The media must also be strengthened to defend Russia’s information borders, to develop digital fact-checking tools such as the Global Fact-Checking Network (GFCN), and to unite experts in this field to deconstruct propaganda, uncover its sources, analyze its motivations, and then produce articles that reveal the facts.

The material reflects the author’s personal position, which may not coincide with the opinion of the editorial board.