Agent-based AI: New challenges for digital security

It sets its own goals, learns on the fly, and adapts as conditions change. With the advent of agent-based AI, the fight against fake news is transforming from refuting individual, static fakes to confronting a highly organized, intelligent adversary that requires a fundamentally new approach to fact-checking.

Agent-based artificial intelligence (AI) is an advanced class of AI systems that can act autonomously, set goals, make decisions, and plan actions in a changing environment.

Unlike traditional AI, which follows fixed algorithms and responds to a limited range of scenarios, agent-based AI consists of intelligent agents that adapt, learn, and can interact with other agents and humans to carry out complex, multi-step processes.

Key functions of agent-based AI:

1. Perception — the system collects data from the outside world (via sensors, databases, the Internet);

2. Thinking — the system analyzes the received data, evaluates the situation, forms hypotheses and strategic plans, and learns from its own experience;

3. Action — the system takes specific actions to achieve the set goals.

Agent-based AI’s ability to learn and organize itself opens the door to solving complex real-time problems, from traffic management to financial analysis and medical diagnostics.
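The perceive–think–act cycle described above can be sketched as a minimal loop. This is an illustrative toy, not any specific framework; the `SimpleAgent` class, its goal, and the environment dictionary are all invented for the example:

```python
from dataclasses import dataclass, field

@dataclass
class SimpleAgent:
    """Toy agent illustrating the perceive -> think -> act cycle."""
    goal: float                 # target value the agent tries to reach
    memory: list = field(default_factory=list)

    def perceive(self, environment: dict) -> float:
        # 1. Perception: collect data from the outside world (here, one reading).
        reading = environment["value"]
        self.memory.append(reading)
        return reading

    def think(self, reading: float) -> str:
        # 2. Thinking: analyze the data and form a plan relative to the goal.
        if reading < self.goal:
            return "increase"
        if reading > self.goal:
            return "decrease"
        return "hold"

    def act(self, environment: dict, plan: str) -> None:
        # 3. Action: take a concrete step that changes the environment.
        if plan == "increase":
            environment["value"] += 1
        elif plan == "decrease":
            environment["value"] -= 1

env = {"value": 3}
agent = SimpleAgent(goal=7)
for _ in range(10):
    plan = agent.think(agent.perceive(env))
    agent.act(env, plan)

print(env["value"])  # the agent has steered the environment to its goal: 7
```

Real agentic systems replace each of these stubs with far richer machinery (sensors, learned models, planners), but the closed loop of sensing, reasoning, and acting toward a goal is the same.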

What is the difference between agent-based AI and generative AI?

Generative AI and agent-based AI are two important areas of artificial intelligence that play a key role in the development of the digital world, but they have fundamentally different functions and objectives.

Generative AI can be thought of as a creative artisan that creates new content — text, images, and music — based on the vast amounts of data it has been trained on. It “learns” from other people’s work and then independently generates similar but original works. Examples of such AI are ChatGPT, DALL·E, and MidJourney.

However, generative AI has an important limitation: it operates within learned patterns and does not grasp the meaning of what it generates. It can surprise with its creativity, but it does not make independent decisions and depends on explicit commands or prompts.

Agent-based AI is a more “mature” and independent form of intelligence that not only creates but also acts and makes decisions with minimal human involvement.
It does not simply execute commands; it manages the process itself: it analyzes the situation, sets its own goals, and pursues them, adjusting its actions along the way to achieve results. Examples include autonomous robots, drones, and smart assistants that can change delivery routes, interact with users, and perform complex multitasking operations.

It is noteworthy that these two types of AI can complement each other in a single system.
For example, an agent-based AI can negotiate with a client while a generative AI supplies it with charismatic responses. Or an autonomous robot (agent AI) prepares a dish while a generative AI comes up with unusual recipes.
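This division of labor can be sketched schematically: an agentic layer makes the decisions and delegates phrasing to a generative component. Both parts are stubbed here; `generate_reply` is a stand-in for a real generative model, not an actual API, and `SupportAgent` is invented for the example:

```python
def generate_reply(topic: str) -> str:
    """Stand-in for a generative model: produces content on demand."""
    return f"Here is a friendly, detailed answer about {topic}."

class SupportAgent:
    """Agentic layer: decides *what* to do; delegates *how to phrase it*."""

    def __init__(self, escalation_topics):
        self.escalation_topics = set(escalation_topics)

    def handle(self, request: str) -> str:
        # Decision-making stays with the agent ...
        if request in self.escalation_topics:
            return "Escalating to a human operator."
        # ... while content creation is delegated to the generative component.
        return generate_reply(request)

agent = SupportAgent(escalation_topics={"refund dispute"})
print(agent.handle("delivery status"))   # answered by the generative stub
print(agent.handle("refund dispute"))    # handled by the agent's own policy
```

The point of the sketch is the boundary: swapping in a stronger generative model changes the quality of the replies, but the agent's decision logic, and responsibility for it, stays separate.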

The combination of these approaches promises new opportunities, but there is a fundamental obstacle on the way to them — the unpredictability of decisions. The main task has not yet been solved — how to guarantee control, security and establish responsibility for the actions of complex systems. This issue makes both the technological breakthrough and the development of ethical and legal norms equally important.

Agent-based AI and fakes

The potential of agent-based AI is inseparable from its threats. Because of its autonomy and wide reach, the technology can be turned to harm: producing steady streams of fakes, running sophisticated fraud operations, and mounting large-scale cyberattacks.

Agent-based AI can spread false information in the following main ways:

1. Using hyperpersonalization: big-data analysis and computational sociology make it possible to segment an audience by political views, interests, fears, and so on. Based on this data, agent-based AI can create extremely convincing and, therefore, more dangerous fakes.

2. Mass and continuous replication of false information, taking into account the audience’s reaction, which enhances the effect and often leads to the formation of “reality” based on artificially created beliefs (the concept of “performative prediction”).

3. The use of deepfakes and other synthetic media to create believable photos, videos, and audio that further enhance the credibility of the disseminated information.

4. Exploitation of social engineering, when AI agents collect users’ personal data and use it for more effective persuasion and manipulation.

5. Creation and management of fake profiles and social media accounts to simulate live communication and discussion, which increases trust and maximizes the reach of misinformation.

6. Creation of multiagent networks, where each agent performs a specialized function: some analyze trends on social networks, some create fake news, some generate fake photos and videos, and others manage masses of fake accounts for wide distribution of the content. Such a system can operate almost autonomously and adapt false narratives to the target audience.

The emergence of coordinated agent communities is a key trend in the development of artificial intelligence. An upcoming review will assess the opportunities and challenges it brings.