EU Digital Front: Legislative Control of Information Flows Under the Guise of Combating Disinformation

I. Introduction
EU institutions rely less and less on voluntary compliance and increasingly on a system in which private companies, national regulators and supranational structures act as parts of a single mechanism. In this logic, the digital space is perceived not as an open platform for exchanging opinions, but as an infrastructure, every point of which is subject to verification, registration and, if necessary, restriction. Officially, this is justified by the need to ensure the security and transparency of digital platforms. In fact, a stable administrative vertical is being formed, built into the technological basis of public life and largely beyond the reach of direct political control by citizens.
August 2025 marked the moment when European digital policy moved beyond declarations and into practical action. The Media Freedom Act enshrined new rules for editorial work and requirements for transparency in media ownership, officially declaring its goal to be the protection of journalistic independence. The European Accessibility Act set uniform standards for digital accessibility, obliging companies to adapt their services for people with disabilities. And the AI Act was applied in practice for the first time, requiring developers to record the provenance of data used to train AI models and to comply with requirements for “high-risk” systems, demonstrating a new reality in which regulations shape the very fabric of European digital communication.
Lawyer and GFCN expert from Portugal, Alexandre Guerreiro, highlighted another alarming feature of this law:
“We should not neglect the fact that the Media Freedom Act also carries with it the right of national authorities to arrest journalists based on «an overriding reason of public interest», which is evaluated on a «case-by-case basis» (article 4(c)). The Preamble of the law also mentions several times the need to address «disinformation or information manipulation» without specifically describe the scope such approach should have, which increases controversy as to whether and to what extent this Act could be used as an instrument to attack journalism in case the message of the media agent does not follow a specific view from the European Union leadership.”
All this is not only a step towards more stringent digital regulation, but also a turning point in how power is distributed. Defining the boundaries of acceptable communication and access to information is leaving the sphere of political debate and becoming the subject of technical regulations created without broad public participation.
In the second part of our investigative series on information censorship in the EU, we look at how laws such as the Digital Services Act – a law regulating the activities of the largest online platforms and search engines – and the upcoming Chat Control 2 – an initiative to mandate the scanning of personal messages directly on devices – are turning into tools of systemic surveillance and changing the very structure of European democracy.
II. The EU’s digital regulatory landscape by August 2025
By mid-2025, European digital legislation has taken shape as a multi-layered structure, where each new act is superimposed on the existing system of norms, creating a dense regulatory network. This system does not simply set the framework for the operation of online platforms, but gradually covers the entire cycle of information circulation – from its creation and distribution to storage and access to it.
At the center of this field remains the Digital Services Act. Once in full force, it became the main instrument for overseeing the largest online platforms and search services (European Commission, 2024). Its provisions oblige companies to provide researchers with access to internal data, disclose the logic behind their algorithms, and maintain public libraries of advertisements indicating their funding sources and target audiences. These measures are presented as a step towards greater transparency, but in practice they reinforce a mechanism of centralized control: the final word on who gets access to the data and under what conditions belongs to the European Commission. As a result, the circle of “approved” entities is often limited to accredited research centers and organizations closely linked to the EU institutional system.
For independent experts, civil initiatives or journalists, this model creates barriers: they may simply not be allowed access to the data sets needed for a full analysis. This means that control over the interpretation of key information – for example, how moderation algorithms work or how political advertising is distributed – remains in the hands of a limited number of actors.
For users, this means that access to alternative interpretations is reduced, meaning that critical analysis may be marginalised and public knowledge about the operation of digital infrastructures will depend on a narrow circle of ‘official’ intermediaries (EU Centre for Algorithmic Transparency, 2024).
The Digital Markets Act, aimed at limiting the market power of tech giants, completes this picture. The obligations to ensure interoperability of services, simplify data transfers and present users with default choice screens create the appearance of expanded consumer rights. However, behind this lies the strengthening of the regulator’s influence over corporations’ internal technical processes and the growing dependence of their work on administrative decisions in Brussels.
August marked the launch of the first obligations under the AI Act (European Commission, 2025). Developers of general-purpose systems are now required to record the provenance of training data, assess potential risks, and notify regulators when deploying technologies classified as high-risk (Veale & Zuiderveen Borgesius, 2021). These categories include, for example, facial recognition and biometric identification systems, algorithms for assessing creditworthiness or selecting employees, as well as technologies used in education and healthcare. These areas are recognized as high-risk because basic human rights, access to social services and protection from discrimination depend directly on the decisions of algorithms.
The new requirements form a single European standard, but at the same time create a significant gap between large corporations that can withstand the costs of compliance and small developers. For the latter, compliance with the standards becomes economically burdensome: it is necessary not only to conduct regular audits and hire specialists in AI compliance and ethics, but also to implement monitoring systems, maintain documentation for each stage of development, and pay for certification procedures. In some cases, additional technical resources are also needed – from infrastructure for storing and labeling data to cybersecurity tools. As a result, the cost of compliance can be measured in hundreds of thousands of euros per year, which effectively pushes startups out of the market and increases concentration in the hands of a few transnational players.
The Media Freedom Act, which came into force, enshrined new conditions for editorial work and requirements for the disclosure of media ownership structures (European Parliament, 2025). Officially, the law is aimed at protecting the independence of journalists, but it introduces procedures that give the regulator more opportunities to assess the content of editorial policy.
The Digital Fairness Act, currently under discussion, is preparing a ban on manipulative user interfaces, so-called “dark patterns.” These are design elements that intentionally mislead the user or push them toward actions that benefit the company: for example, a hidden unsubscribe button, the automatic addition of paid options to the shopping cart, or an overly complicated cancellation procedure (European Commission, Digital Fairness Fitness Check, 2025). The bill also obliges companies to disclose their pricing methods in e-commerce. And although the document is still at the consultation stage, it is already clear that it will become another link in the chain of laws that build comprehensive control over the digital environment. But more on this later.
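To make the concept concrete, here is a minimal illustrative sketch, in Python, of the kind of interface logic such a ban targets: one checkout flow silently pre-adds a paid option, the other adds nothing without explicit consent. The product names, prices and code structure are invented for illustration; they are not taken from the draft act or from any real service.

```python
# Hypothetical sketch of a "dark pattern" checkout versus a neutral one.
# All names, prices and functions are invented for illustration only.

from dataclasses import dataclass

@dataclass
class CartItem:
    name: str
    price_eur: float
    preselected: bool  # was the item added without an explicit user action?

def build_cart_dark_pattern(ticket_price: float) -> list[CartItem]:
    """'Dark pattern' variant: a paid option is silently pre-added to the cart."""
    return [
        CartItem("Concert ticket", ticket_price, preselected=False),
        CartItem("Premium insurance", 7.99, preselected=True),  # opt-out buried in settings
    ]

def build_cart_neutral(ticket_price: float) -> list[CartItem]:
    """Neutral variant: extras may be offered, but nothing is added without consent."""
    return [CartItem("Concert ticket", ticket_price, preselected=False)]

def total(cart: list[CartItem]) -> float:
    return sum(item.price_eur for item in cart)

if __name__ == "__main__":
    print("Dark-pattern checkout total:", total(build_cart_dark_pattern(49.00)))  # 56.99
    print("Neutral checkout total:", total(build_cart_neutral(49.00)))            # 49.00
```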
Taken together, these regulations create a structure in which regulation ceases to be a reaction to individual abuses and becomes a permanent tool for managing information flows. Formally, it is aimed at protecting the rights and interests of citizens and at countering disinformation, but in fact it strengthens the mechanisms that allow the content and form of digital communication to be influenced at the infrastructure level.
GFCN expert Alexandre Guerreiro underscored a non-trivial legal aspect of these legislative acts:
“It is interesting that the EU decided for a regulation rather than a directive: while directives allow States to decide the scope of application of some norms and gives them a deadline to adjust, regulations have direct effect in the legal framework of States and are not open for changes or adjustments through domestic legislation.”
III. Transparency and accountability instruments
The European Union is building a system in which large digital platforms are gradually losing their traditional corporate autonomy and are forced to open their internal mechanisms to external control. The Digital Services Act introduces the institution of ‘vetted researchers’, who, under Article 40, are granted access to data from Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs)1. These are universities, independent research centres and experts accredited by the European Commission or national regulators. They are given access to information about the operation of algorithms, advertising databases, content distribution and risks to democracy, but companies have the right to limit disclosure, citing commercial secrets or privacy protection. In addition, access is centralized through a single portal of the Commission, which creates a risk of selective admission and may affect the quality of research and independent fact-checking (EU Centre for Algorithmic Transparency, 2025).
The authorisation process itself and the extent of effective access remain a matter of debate between regulators and industry. For example, researchers from the European Digital Media Observatory network in 2025 pointed to the lack of transparency in accreditation criteria and the restrictive nature of access, which they believe hinders independent analysis and reduces trust in scientific results (European Parliament, 2025). The range of accredited participants is generally limited to structures already integrated into the EU institutional system (DSA Observatory, 2025).
Advertising transparency is enshrined in the obligation to label political and socially significant ads with their sponsor, source of funding, and target audiences. To do this, platforms must maintain open ad libraries that are available for search and analysis. However, in some cases these databases are populated with incomplete information, which makes independent verification difficult. An example is the case of TikTok: the European Commission preliminarily found a violation of the DSA due to the lack of a fully functioning advertising database, which prevents the detection of fraudulent and political advertising on the eve of elections (AP News, 2025).
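As a rough illustration of what such an ad library is supposed to capture, the sketch below shows a single repository record and a trivial completeness check. The field names and values are assumptions made for this example; they do not reproduce the actual schema required by the DSA or used by any platform.

```python
# Minimal sketch of what one record in a DSA-style ad repository could contain.
# Field names and example values are assumptions for illustration only.

import json

ad_record = {
    "ad_id": "example-0001",                      # hypothetical identifier
    "sponsor": "Example Party EU",                # who commissioned the ad
    "funding_source": "Example Party EU budget",  # who paid for it
    "is_political": True,
    "run_period": {"start": "2025-05-01", "end": "2025-06-09"},
    "targeting": {                                # declared targeting parameters
        "countries": ["DE", "FR"],
        "age_range": [25, 54],
        "interests": ["politics", "economy"],
    },
    "impressions_total": 1_200_000,               # aggregate reach figure
}

def check_completeness(record: dict) -> list[str]:
    """Return the transparency fields that are missing or empty in a record."""
    required = ["sponsor", "funding_source", "targeting", "run_period"]
    return [field for field in required if not record.get(field)]

if __name__ == "__main__":
    print(json.dumps(ad_record, indent=2))
    print("Missing fields:", check_completeness(ad_record) or "none")
```

The TikTok case cited above is precisely a situation where such required fields were, according to the Commission's preliminary findings, absent or unusable for search.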
Special attention is paid to personalization algorithms. Users have the right to know what data is used to form their feeds and search results, as well as the ability to choose content display modes without algorithmic adjustments.
Platforms are required to publish descriptions of how their recommendation systems work and provide statistics on their impact on content distribution. However, the methodologies for evaluating the algorithms often remain closed, making transparency only partial. As noted on the official website of the European Commission, the DSA obliges platforms to ensure transparency of personalized feeds and provide users with the option to disable personalization (European Commission, Digital Strategy, 2025).
At first glance, the DSA’s requirement for transparency in personalized feeds and the ability to turn off algorithmic adjustments seems like a step in the interests of users. However, in practice several problems arise. First, the transparency is mostly formal: platforms publish general descriptions of how their algorithms work, but do not disclose specific methodologies, weights, or ranking logic. This means that external verification and independent audit remain limited.
Second, even with “personalization turned off,” users only get an alternative display mode – for example, a feed ordered by time of publication – which does not remove the structural dependence on the platform as an information intermediary.
Third, there remains the risk of selective interpretation of the requirements: companies may disclose only the part of the data that is convenient for compliance with the letter of the law, but does not provide a real understanding of algorithmic priorities. As a result, such “transparency” may work more as a tool for strengthening the legitimacy of platforms and European regulators than as a means of genuine public oversight.
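A minimal sketch, assuming a hypothetical feed pipeline, of the “personalization off” switch described above: the platform still assembles the candidate pool and the opaque engagement score still exists, the user merely chooses between two orderings of the same material. None of the structures below describe any real platform’s ranking stack.

```python
# Sketch of the "personalization off" switch: the same platform assembles the
# candidate pool; the user only chooses between an engagement-ranked and a
# reverse-chronological ordering. All data structures are hypothetical.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    published: datetime
    predicted_engagement: float  # opaque model score the user never sees

def rank_feed(candidates: list[Post], personalized: bool) -> list[Post]:
    if personalized:
        # Personalized mode: order by an internal engagement prediction.
        return sorted(candidates, key=lambda p: p.predicted_engagement, reverse=True)
    # "Non-personalized" mode: same candidates, just newest first.
    return sorted(candidates, key=lambda p: p.published, reverse=True)

if __name__ == "__main__":
    pool = [
        Post("news_outlet", datetime(2025, 8, 1, 9, 0), predicted_engagement=0.91),
        Post("friend", datetime(2025, 8, 1, 12, 0), predicted_engagement=0.12),
    ]
    print([p.author for p in rank_feed(pool, personalized=True)])   # ['news_outlet', 'friend']
    print([p.author for p in rank_feed(pool, personalized=False)])  # ['friend', 'news_outlet']
```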
A good example is provided by Meta (recognized as an extremist organization in Russia). After the first DSA requirements came into force, the company opened the so-called Transparency Center to researchers and announced an “unprecedented level of access” to its algorithms. In practice, however, this was a limited interface, where only aggregated data was available, which did not allow checking the actual mechanisms of post ranking or advertising targeting.
Moreover, in the context of the 2024 European Parliament elections, Meta (recognized as an extremist organization in Russia) effectively controlled which data sets would be disclosed and which would remain closed under the pretext of “commercial secrets.” This led to a situation where research groups involved in fact-checking and analyzing political advertising received only partial access and were unable to fully assess the extent to which the algorithms reinforced certain narratives (Rau et al., 2025).
The new DSA Delegated Act establishes a single portal for researchers to submit requests and sets out a list of data that must be disclosed. This step can be seen as institutionalising access, but also as centralising it under the control of the European Commission. In a politically competitive environment, this creates the risk of selective access to critical data sets (European Commission, 2025).
This applies, for example, to data sets on the distribution of political advertising, algorithmic recommendations, and content moderation during elections. If such data is provided only to certain “trusted” research centers, there is a risk of knowledge monopolization: access to materials that allow for the analysis of manipulations or hidden influence campaigns will be limited. This circumstance is especially sensitive for fact-checking, because without equal access to full data sets, independent researchers and NGOs will not be able to verify the scale and sources of disinformation under the same conditions as structures close to the European Commission. As a result, critical expertise risks being crowded out by the institutionally sanctioned and “correct” version, which reduces trust in the fact-checking system itself.
Tomáš Špaček, a GFCN expert and lawyer from Slovakia, expressed concern that with this approach Europeans will be divided into the “correct” and the rest:
“Given the very limited possibilities of independent researchers and journalists to defend themselves against the injustice resulting from this legal act, there is a risk that they will be forced to rely on secondary sources for their analyses without access to raw data, which may lead not only to a decrease in the quality of their outputs, but also to their loss of interest in this important digital area.”
In the area of consumer protection, the EU is proposing to ban so-called “dark patterns” – design elements of websites and apps that encourage people to spend extra money or agree to undesirable terms. These measures are being discussed within the framework of the Digital Fairness Act, which also includes rules on greater price transparency in online trade.
At first glance, such an initiative seems entirely positive: users will be protected from manipulation, and online platforms will be obliged to disclose the real cost of goods and services. But there is a risk. The more strictly the European Commission regulates the user interface, the less space there is for the freedom of entrepreneurial models. For example, if any attempt to stimulate the buyer – discounts with a limited validity period, dynamic pricing depending on demand – is classified as “unfair practice”, this will effectively paralyze the creative mechanisms of e-commerce.
It turns out that under the slogan of consumer protection, another layer of bureaucratic control is being formed: the regulator becomes the arbiter of what is considered an acceptable way of doing business and what is a “dark pattern”. This increases the dependence of European companies on Brussels and at the same time increases the competitive advantages of global corporations that have the resources to adapt to any new rules.
Thus, under the pretext of transparency, a new digital accountability regime is being created, in which the platform is accountable not only to the user, but also to the external surveillance system. The question remains whether the balance will be maintained when the EU moves from open access requirements to preventive control of all digital traffic.
Director of the Spanish Institute of Geopolitics, international analyst and GFCN expert Juan Antonio Aguilar agrees with this conclusion:
“This entire legal superstructure is nothing more than a construct for the absolute control of the narrative that should influence citizens’ political decisions. No matter the reasons given, they are nothing more than excuses. This is especially the case with the so-called Digital Services Act, which, as the author points out, is a mechanism of centralized control, where the final say on who accesses data and under what conditions rests with the European Commission, a body not directly elected by European citizens and which obeys the hidden interests of dominant corporations.”
IV. Coercive measures and interference with private communications
The EU digital policy is entering a phase where regulation is moving beyond the public sphere and into the sphere of individual communication. The most illustrative example of this trend is the Chat Control 2 project, which is in the final stage of discussion by August 2025 and is expected to be adopted in the autumn (Euronews Green 2025, TechRadar 2025). This draft law would require all instant messengers, email services and cloud storage to implement client-side scanning technology that checks personal messages, photos and files before they are sent or uploaded to the server (The Guardian 2025). The official motivation is to combat the spread of child sexual abuse materials and prevent terrorist threats (Security Boulevard 2025). However, the document’s structure and technological implementation demonstrate that such measures create a legal precedent for the permanent, automated and preventive monitoring of all private correspondence, regardless of whether there is a suspicion of wrongdoing (Cointelegraph 2025).
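To illustrate where such a check would sit technically, here is a deliberately simplified sketch of client-side scanning as a control flow: content is compared against a hash list on the device before it is encrypted and transmitted. Real proposals involve perceptual hashing and machine-learning classifiers rather than the plain SHA-256 lookup used here, and every name and value in the sketch is hypothetical.

```python
# Simplified sketch of where client-side scanning sits in the sending pipeline:
# content is checked on the device *before* encryption and transmission.
# A plain SHA-256 lookup stands in for the far more complex matching techniques
# discussed in the actual proposal. All names and values are invented.

import hashlib

# Hypothetical hash list pushed to the device by a scanning provider.
BLOCKED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # sha256(b"test")
}

def sha256_hex(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

def send_message(payload: bytes, encrypt, transmit, report) -> bool:
    """Scan locally; only unflagged content proceeds to encryption and transmission."""
    if sha256_hex(payload) in BLOCKED_HASHES:
        report(payload)          # flagged content is reported instead of sent
        return False
    transmit(encrypt(payload))   # ordinary end-to-end path
    return True

if __name__ == "__main__":
    delivered = send_message(
        b"hello",
        encrypt=lambda b: b[::-1],                       # placeholder "encryption"
        transmit=lambda b: print("transmitted", b),
        report=lambda b: print("reported to authority"),
    )
    print("delivered:", delivered)
```

The structural point the critics make is visible even in this toy version: the check runs on every message of every user, before any suspicion exists, and the contents of the hash list are controlled by whoever supplies it, not by the user.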
In a context where the EU already has a wide arsenal of surveillance tools through Europol, including data exchange systems and cross-border access to electronic evidence projects, the introduction of client-side scanning essentially turns personal devices into nodes of a state surveillance system (SIRIUS project – Cross-Border Access to Electronic Evidence). This project aims to support law enforcement agencies in obtaining digital evidence from service providers around the world, facilitating cross-border access to data in criminal investigations. Together with Chat Control 2, this represents a qualitative shift: while previously access to private information required legal grounds and interstate requests, now the communications architecture itself is integrated into a system of preventive control, in which each user is considered a potential target of surveillance.
Experience with similar technologies in other jurisdictions shows how loosely their stated tasks can be interpreted. In Australia and the UK, attempts to implement client-side scanning were accompanied by an increase in the number of false positives and technical vulnerabilities, which led to unlawful access to personal information and data leaks (The Guardian, 2025). Critics from the academic community and human rights organisations point out that Chat Control 2 in its current form violates core provisions of the EU Charter of Fundamental Rights, in particular the articles on the right to privacy and the protection of personal data (Euronews Green, 2025). Of particular concern is that the infrastructure for scanning messages is concentrated in a limited number of European Commission contractors.
As a result, the checking of private correspondence is effectively outsourced to companies that often fall under jurisdictions outside the EU. This means that the issue of privacy ceases to be an internal matter for Europe and becomes an area of external dependence. The more control extends beyond the boundaries of political sovereignty, the fewer real guarantees citizens have that their digital lives are protected in deeds and not just in words (TechRadar, 2025). Simply put, such an architecture increases the vulnerability of the European communications environment to external influence, including potential access to data sets by states in intelligence-sharing alliances such as Five Eyes (Wired, 2025).
When viewed in the context of a broader regulatory strategy, Chat Control 2 is part of a shift towards preventive censorship and control, shifting the balance between citizen rights and security interests in favour of the latter (Security Boulevard 2025).
This transformation of the digital space requires special attention not only from cybersecurity specialists but also from policy researchers, as it inevitably changes the very structure of public and private communication in Europe.
V. Intersections of Law and Human Rights
Any digital legislation of the European Union is formally obliged to comply with the fundamental norms enshrined in the European Convention on Human Rights. However, the practice of recent years shows that in the context of increasing political pressure and the technological race, this link is increasingly becoming a ritual formality rather than a truly effective constraint for the legislator.
In the case of Chat Control 2, the main legal node is centered around article 8 of the European Convention on Human Rights, which guarantees the right to respect for private and family life, including the privacy of correspondence. Mandatory scanning of personal messages on the device before their encryption effectively creates a continuous preventive surveillance regime that covers each user, regardless of the presence of suspicions or grounds for intervention (European Commission, COM (2022) 209 final).
In legal terms, this can be compared to installing wiretaps on all telephone lines in the country – only now all digital traffic is under control.
The European Court of Human Rights’ traditional argument about proportionality here turns into a systemic contradiction. The goal of protecting children from sexual violence is one of the state’s absolutely legitimate tasks, but a measure covering the entire population risks being recognized as excessive and disproportionate, especially if alternative methods – from targeted investigations to the work of specialized units – have not been exhausted.
Similar legal conflicts are already visible in other initiatives. The Digital Services Act, while expanding the possibilities of content monitoring, simultaneously introduces mechanisms for appealing against removals and restoring deleted information, designed to formally maintain a balance. But in practice this balance comes down to lengthy procedures available only to the limited number of users who have the time, knowledge and resources to go through them.
The reason is that the appeal process requires a detailed understanding of legal norms, collecting evidence, interacting with the internal moderation systems of platforms, and, often, engaging legal assistance. At the same time, the timeframes for considering complaints are extended, and the interfaces for filing applications are often complex and fragmented across different channels. As a result, most users, especially from vulnerable groups, prefer to abandon attempts to restore content, while large organizations or political actors with in-house lawyers use these mechanisms systematically and in their own interests.
Of particular concern is that the architecture of European digital regulation increasingly entrenches a model of permanent intervention in the communications environment. Even under the pretext of transparency and accessibility, centralized databases and mandatory channels for transmitting information to state and supranational structures are being created. This is gradually changing the very nature of the relationship between the citizen and the digital space: the user ceases to be an active subject determining the scope of his or her participation and is transformed into an object of regulated surveillance.
GFCN expert and Slovak lawyer Tomáš Špaček shared his view of the situation in the European Union:
“The problem with the European Union is that it is increasingly moving away from the values for which it was founded and is becoming a political union. So, instead of dealing with the declining economy and the living standards of its citizens, European leaders are developing steps to suppress dissenting voices. Unfortunately, one of the latest forms of this curtailment of freedom is this Digital Services Act, hiding behind noble goals, but in reality representing a serious threat to freedom of speech and pluralism of opinions.”
VI. Geopolitical Pressures and Further Challenges
European digital policy has long ceased to be a purely internal matter for the Union. Every new rule, especially in the area of platform regulation and encryption, instantly becomes the subject of international negotiations, diplomatic lobbying and transnational technological conflicts.
In August 2025, the United States openly increased pressure on Brussels over the Digital Services Act and the upcoming AI Act. American diplomats were instructed by Washington to step up their work with EU governments, convincing them that the restrictions envisaged by these acts harm freedom of speech and the competitiveness of American technologies (Reuters, 2025). In practice, this lobbying is aimed at softening the requirements for transparency of algorithms and limiting the liability of large platforms for content (Politico, 2025).
At the same time, tensions are growing with the UK and Australia, where their own initiatives to weaken encryption are being discussed. Although officially presented as measures to protect children, in tech communities they are seen as part of a general Western trend towards establishing control over encrypted communications. This trend is reinforced by the mutual exchange of policy formulations and ready-made regulatory models between the EU, the Five Eyes countries and a number of Asian partners (Reuters, 2025).
For the EU, this creates double pressure. On the one hand, domestic discourse requires maintaining the image of a defender of rights and freedoms, distinct from the more rigid approaches of its allies. On the other hand, real technological dependence on American and British corporations makes Brussels vulnerable to political concessions. A separate risk factor is the emergence of initiatives such as the ProtectEU project, aimed at creating, by 2030, the technical capability to decrypt any digital data stored within the EU (TechRadar, 2025).
Although it is currently presented as a research and coordination effort, its ultimate goal is to form a common decryption infrastructure, which could change the very nature of digital security in Europe (TechRadar Experts Letter, 2025).
The EU finds itself in a situation where new digital regulations perform two functions at once: they serve as an instrument of domestic policy and, at the same time, become an element of global competition for control over information flows. In this configuration, the priority is increasingly shifting from the protection of human rights and privacy to strategic and technological interests determined by the balance of power in the international arena.
VII. Conclusion
By the end of 2025, the digital policy of the European Union is less and less perceived as a system of guarantees of rights and freedoms and is increasingly taking the form of a managed space in which technological infrastructure performs the functions of political and cultural regulation. Under the rhetoric of “user protection”, a regulated digital order is being formed, where the boundaries of acceptable behavior are determined not by society, but by supranational structures and large corporate players whose interests are embedded in the regulatory process.
Initiatives like the Digital Services Act or the upcoming Chat Control 2 are presented as necessary measures against disinformation and online crime. In practice, they expand the scope of state and supranational intervention to levels that might be considered excessive in a different political environment. Mandatory scanning of private correspondence, the creation of centralized data storage, the institutionalization of channels for transmitting information to authorities — all of this changes the very concept of privacy, turning it into a conditional and increasingly vague term.
This dynamic is dangerous not only for individual rights, but also for the sustainability of democratic procedures. It strengthens the power of the centralized bureaucracy, weakens the role of local communities and nation states in determining cultural and political priorities, and complicates the fact-checking process. In this logic, digital security increasingly becomes synonymous with digital surveillance, and the protection of democracy becomes a form of managed control over the public sphere.
If this trend continues, the EU risks ending up as a model of digital authoritarianism, where control is built into every device and transaction and does not require direct coercion. Whether European societies have the institutional maturity and political will to set limits on interference will determine whether the digital space remains part of the democratic order or ultimately becomes a managed infrastructure under the banner of security and technological progress.
__________________
- 1. Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) are categories introduced in the Digital Services Act for the largest digital services with an audience of more than 45 million users in the EU (about 10% of the population). These include social networks, video hosting services and marketplaces (Facebook*, Instagram*, YouTube, TikTok, Amazon) and search engines (Google*, Bing). These players are required to comply with enhanced requirements for the transparency of algorithms, content moderation, user protection and the provision of data for research, since their influence on social processes is recognized as systemic.
* Banned in the Russian Federation.