Fake AI Communities: a new generation of mass influence tools
Imagine a digital anthill in which each virtual "ant" is an AI agent with its own task. Individually they are not dangerous, but together they build complex structures of influence. Today we are watching single bots evolve into exactly such coordinated "colonies," capable of simulating reality with remarkable plausibility.
As we found out in the previous article, AI agents can plan their actions several steps ahead, promptly adapt to changes in context, and coordinate with other agents to solve problems jointly. These abilities form the basis of a new phenomenon: fake AI communities. At their core, these are complex multi-agent systems in which autonomous AI agents conduct discussions and mimic experts, and their task is to create a complete illusion of lively, competent interaction for the observer. In such communities, agents typically do the following (a minimal sketch of the coordination pattern appears after the list):
1. Generate plausible posts and messages, imitating the style and knowledge of real experts.
2. Create and maintain fake profiles with a history of activity and a distinctive manner of communication.
3. Conduct dialogues among themselves, amplifying and reinforcing particular points of view.
4. Use audio and video deepfakes to enhance the appearance of authenticity.
5. Analyze trends on social networks and adapt content to the reactions of real users.
6. Distribute fake content at scale through fake accounts and groups, creating the impression of genuine public opinion.
7. Apply social engineering techniques to personalize content and strengthen its impact on the audience.
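To make the coordination pattern behind points 1-3 and 5 concrete, here is a deliberately simplified, purely illustrative Python sketch. All names are invented, and llm_reply is a stand-in stub rather than a real model call; an actual system would prompt a language model with the persona's profile and the current thread. The only point is to show the turn-taking loop in which several "personas" reinforce one seeded viewpoint while accumulating an activity history.

```python
import random
from dataclasses import dataclass, field


@dataclass
class Persona:
    """A fake 'expert' profile: a name, a stance it pushes, and a message history."""
    name: str
    stance: str
    history: list = field(default_factory=list)


def llm_reply(persona: Persona, thread: list[str]) -> str:
    """Stand-in for a language-model call.

    A real multi-agent system would prompt an LLM with the persona's profile
    and the current thread; this stub returns a canned template so that only
    the coordination loop itself is on display.
    """
    last = thread[-1]
    return f"{persona.name}: replying to '{last[:40]}...' -- as a specialist, {persona.stance}."


def run_discussion(personas: list[Persona], opening_post: str, turns: int = 6) -> list[str]:
    """Simulate a coordinated 'discussion': agents take turns replying to a
    shared thread, each reinforcing the same stance in a different voice."""
    thread = [opening_post]
    for _ in range(turns):
        speaker = random.choice(personas)
        reply = llm_reply(speaker, thread)
        speaker.history.append(reply)  # the fake profile accumulates an activity history
        thread.append(reply)
    return thread


if __name__ == "__main__":
    personas = [
        Persona("Dr. A", "the evidence clearly supports this"),
        Persona("Analyst B", "independent data points the same way"),
        Persona("Prof. C", "my colleagues reached the same conclusion"),
    ]
    for msg in run_discussion(personas, "Opening claim seeded by the operator."):
        print(msg)
```

Even this toy loop shows why the output looks like a living conversation: each message arrives from a distinct, consistent "voice," yet all of them converge on the same conclusion.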
The potential of such communities suggests that, in the hands of scammers, this technology could merge phishing and scam bots on social networks and instant messengers into a single system which, disguised as real people, conducts a wide variety of dialogues, collects money and personal data, and in some cases uses deepfakes to stage video calls with "live" virtual interlocutors.
Charity is becoming one of the most notable potential targets for scammers using autonomous AI systems to raise funds under the guise of good deeds. Neural networks are already able to create emotionally moving and compelling content (stories, videos, photographs) that inspires empathy and trust in a wide audience. And because agent-based AI can fine-tune content for each individual user, such materials can potentially deceive even attentive users.
The Sage Future experiment
The American non-profit organization Sage Future conducted an experiment in which four AI agents (OpenAI's GPT-4o and o1, plus Anthropic's Claude 3.6 and 3.7 Sonnet) were placed in a virtual environment to raise funds for charity on their own.
The agents were given the freedom to choose which organization to support and how best to draw attention to their campaign. During the experiment they researched charities, created and edited documents, coordinated with one another, sent emails through pre-configured accounts, and even created an account on the social network X to promote the campaign.
At the same time, the agents were not completely autonomous: their actions were monitored and directed by people, who suggested which sites to pay attention to or what steps to take, and most of the donations came from these viewers. Along the way, the agents ran into difficulties: they sometimes got distracted, got stuck, or could not pass CAPTCHA checks without human help.
Adam Binksmith, CEO of Sage Future, noted that the experiment clearly demonstrated both the potential of AI agents and the speed at which they are developing, and that the organization is already planning more advanced monitoring and safety systems to oversee their work.
The further development of AI agents therefore depends directly on a responsible approach to their creation and deployment, with the protection of users and trust in the digital environment as priorities. On the one hand, agents offer deep automation of tasks that benefits key sectors of the economy and society; on the other, their ability to communicate and create content opens a new operational space for disinformation and fraud. That is why combating these threats requires an integrated approach combining technology, legislation, and education. The better we understand the nature of the threats and the better our tools for detecting and neutralizing them become, the more effectively we can use AI in the public interest and minimize the risks of abuse.
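As one concrete illustration of what such detection tools can look like, below is a minimal, hypothetical Python sketch of a single heuristic: flagging near-duplicate posts published by different accounts within a short time window, one common signal of coordinated inauthentic behavior. The sample posts, account names, and thresholds are invented for the example; real systems combine many richer signals (account metadata, network structure, posting cadence) rather than relying on lexical similarity alone.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Toy feed: (account, timestamp in minutes, text). In practice this data would
# come from a platform; the accounts and posts here are entirely made up.
POSTS = [
    ("acct_01", 10, "Charity X changed my life, please donate today!"),
    ("acct_02", 12, "Charity X changed my life - please donate today"),
    ("acct_03", 14, "Charity X truly changed my life, donate today!!"),
    ("acct_04", 90, "Looking for hiking recommendations near the lake."),
]


def similarity(a: str, b: str) -> float:
    """Rough lexical similarity in [0, 1] (SequenceMatcher ratio)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def flag_coordinated(posts, sim_threshold: float = 0.8, window_minutes: int = 30):
    """Flag pairs of near-duplicate posts made by different accounts within a
    short time window - one simple signal of coordinated inauthentic behavior."""
    flagged = []
    for (u1, t1, p1), (u2, t2, p2) in combinations(posts, 2):
        if u1 == u2 or abs(t1 - t2) > window_minutes:
            continue
        score = similarity(p1, p2)
        if score >= sim_threshold:
            flagged.append((u1, u2, round(score, 2)))
    return flagged


if __name__ == "__main__":
    for pair in flag_coordinated(POSTS):
        print("possible coordination:", pair)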