Democratic Deficit in Digital Governance: From Rights Protection to Administrative Concentration of Power — II

Part III of a series of investigations into cognitive policy in the European Union.
IV. Social and Cultural Consequences: Transformation of the Public Sphere (Examples)
European digital regulation is no longer an abstract legal construct: it has materialized in the everyday experience of millions of citizens. Every interaction with the digital space is now mediated by an architecture of filters, algorithms, and procedures that shape not only what we see, but also how we understand the limits of what is possible in public communication. This system, designed to ensure security, is generating unintended social effects that are changing the very nature of the European public sphere.
Algorithmic moderation and narrowing of discursive space
Since the Digital Services Act came into full effect in February 2024, European users have seen a sharp increase in content moderation. The largest platforms, designated Very Large Online Platforms (VLOPs), are required to conduct annual systemic risk assessments and implement appropriate mitigation measures. In practice, this has led to preventive content filtering through a combination of machine learning and human review, producing the phenomenon of “algorithmic reinsurance”: platforms, fearing fines of up to 6% of global turnover, prefer to remove controversial content preemptively.
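The economics of this reinsurance can be made explicit with a toy expected-cost model. The sketch below is purely illustrative: the numbers and function name are invented and describe no platform’s actual system; it only shows how an asymmetric penalty structure mechanically lowers the removal threshold.

```python
# Purely illustrative model (invented numbers, not any platform's logic)
# of why large fines skew moderation toward preemptive removal.

def should_remove(p_violation: float,
                  cost_false_removal: float,
                  cost_missed_violation: float) -> bool:
    """Remove when the expected cost of keeping the item up
    exceeds the expected cost of taking it down."""
    expected_cost_keep = p_violation * cost_missed_violation
    expected_cost_remove = (1 - p_violation) * cost_false_removal
    return expected_cost_keep > expected_cost_remove

# With a potential regulatory fine dwarfing the cost of annoying one user,
# even content that is 95% likely lawful gets taken down:
print(should_remove(p_violation=0.05,
                    cost_false_removal=1.0,
                    cost_missed_violation=100.0))  # -> True
```

Under these hypothetical cost assumptions, content with only a 5% estimated probability of violating the rules is still removed: the “play it safe” dynamic in miniature.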
The scale of moderation interventions is striking: according to an analysis of the DSA Transparency Database, more than 195 million moderation decisions from eight major social platforms were recorded in the first 60 days of the reporting system. Yet the statistics do not capture the qualitative changes in the nature of digital communication. A telling example came in 2020, when Facebook removed a Guardian article featuring a historic 19th-century photograph of Aboriginal Australians in chains for “violating nudity rules” (Reclaim The Net, 2020). What seemed like a technical glitch at the time has since become a systemic problem of algorithmic hypersensitivity to context.
Journalists and independent media have proven particularly vulnerable. The European Federation of Journalists has documented systematic cases of journalistic material being removed without prior notice. For large newsrooms with legal departments this creates additional costs; for small independent projects it is an existential threat. As the federation notes, “the very essence of news is time”: material blocked for days or weeks by appeal procedures loses its public significance.
The cumulative result of such interventions is preemptive self-censorship: content creators begin to internalize algorithmic restrictions at the production stage. This is not classic censorship with clearly formulated prohibitions. Rather, it is a blurred zone of uncertainty whose boundaries of the permissible are constantly redrawn by invisible algorithmic arbiters.
Linguistic Inequality in Digital Moderation
An analysis of the first platform transparency reports published in November 2023 revealed critical inequalities in the language coverage of moderation teams. Global Witness and the Platform Governance, Media and Technology research lab found that platforms X and Snapchat do not have a single moderator who speaks Estonian, Greek, Hungarian, Irish, Lithuanian, Maltese, Slovak or Slovenian. LinkedIn reports having moderators for only 7 of the 24 official EU languages.
This “linguistic blindness” creates a two-tier system of digital citizenship. Users from smaller EU countries find their content moderated either by algorithms that do not understand the nuances of their language or by people relying on machine translation. Lithuanian irony can be misclassified as hate speech, Slovenian poetry as potentially harmful content. Simply put, structural inequality is built into the architecture of digital Europe.
From Compliance to Capitulation: The Erosion of User Autonomy
The phenomenon of “consent fatigue” has become one of the unintended consequences of the GDPR. A 2020 study in the Journal of Cybersecurity found that, instead of expanding user rights, the regulation has led even more people to accept all permissions automatically in an effort to get rid of annoying notifications.
By 2025, the situation had worsened. According to CookieYes, 87% of American users say they want a simple way to reject cookies, yet in practice about 50% accept all permissions automatically because of the complexity of the interfaces. An “administrative click culture” is emerging as a new type of digital subjectivity, based on passive consent by default. Citizens grow accustomed to being constantly asked to express their will, but in a form that makes meaningful choice practically impossible.
Manipulative design solutions (dark patterns), in which the “Accept all” button is highlighted more prominently than the “Reject” button, remain widespread despite the European Data Protection Board’s recommendations against such practices.
Transforming Trust: From Openness to Managed Security
The EU’s digital regulation fundamentally redefines the relationship between citizens, platforms and the state. The traditional model of the Internet as a space of relatively free information exchange is being replaced by a model of “managed security”, in which every interaction is filtered through risk assessments.
Article 34 of the DSA requires platforms to identify and assess “systemic risks”, but the concept itself remains vague, leaving ample room for interpretation. Platforms tend to err on the side of caution, removing or restricting content at the slightest doubt. This creates an atmosphere of latent suspicion: users begin to perceive the digital space not as a forum for the free exchange of ideas but as a zone of heightened control.
Media regulation: protection through restriction
The European Media Freedom Act (EMFA), which became fully applicable in August 2025, created an additional regulatory layer for the media. Article 18 grants media organisations a special status in content moderation, requiring large platforms to engage in dialogue before removing journalistic material.
In practice, however, this creates new problems. To receive EMFA protection, media outlets must demonstrate “editorial independence” and compliance with “professional standards”, which becomes a form of soft coercion toward particular models of journalism. Small and independent media that lack the resources to navigate the complex regulatory landscape are often excluded from the protection mechanisms. It is a troubling paradox for neoliberal democracies that regulation intended to protect media freedom can contribute to the concentration of the media market in the hands of large players.
The drive for uniform content moderation standards across the EU-27 inevitably leads to cultural homogenisation. Algorithms trained on aggregated data are unable to consider the subtleties of local contexts — what is acceptable satire in one country may be considered hate speech in another.
The result of these efforts has been an erosion of cultural diversity in the digital space. Content is becoming more sterile as creators avoid potentially controversial topics, which diminishes the pluralism of speech and opinion. This is particularly noticeable in political discourse, where heated debate and public argument, a necessary element of the democratic process, are increasingly suppressed in the name of “safety” and “preventing harm”.
The Path to a Preventive Control Society
The social and cultural implications of European digital regulation go far beyond technical issues. We are witnessing a fundamental transformation of the public sphere: a space where spontaneity and openness once reigned is now dominated by calculated caution and preventive suspicion. Trust as the basis of public dialogue is giving way to algorithmic control, and cultural diversity is dissolving into unified standards of moderation.
In this new digital ecosystem, a fundamentally different type of subjectivity is emerging: the digital citizen (homo digitalis), for whom self-censorship has become second nature, conformity a survival strategy, and the constant presence of invisible surveillance the habitual background of existence. This subject has learned to anticipate the reactions of algorithms, to internalize their logic, and to adapt his behavior to the demands of a system that promises security in exchange for predictability.
The public sphere ceases to be a space of the unexpected and the provocative — qualities historically necessary for the vitality of a genuine democratic culture. Instead, it is transformed into a carefully managed environment in which the boundaries of what is permitted are determined less by legal norms than by technical parameters and administrative procedures that are far removed from direct democratic participation.
This transformation reflects a broader shift in the understanding of power, anticipated by Western theorists such as Michel Foucault: power manifests itself not only in direct prohibitions, but also in the management of behavior through preventive, algorithmic systems of control. Power becomes ever more distributed, dissolved in processes, at once elusive and omnipresent.
The key question here is not the need for regulation as such: it is clear that the unregulated digital environment created problems of its own. The question is what price European society is prepared to pay for imagined security, and whether the medicine is turning into poison, undermining the vitality of European culture. When the public sphere is regulated so tightly that no room remains for the unexpected and the provocative, it ceases to fulfill its main function: to be the space in which the political will of citizens takes shape.
We are probably on the verge of creating a society of total preventive control. The future will show whether European society will be able to find a balance between security and freedom or will ultimately choose the comfort of managed predictability.
Possible scenarios for the future European digital order (Conclusion)
This investigation demonstrates that the digital policy of the European Union is not simple technical regulation but the formation of a fundamentally new political order. Analysis of the evolution of the legislation reveals a consistent transformation: from the rhetoric of rights protection to the creation of institutional control mechanisms, from declared transparency to the practical controllability of the public sphere.
It is clear that the Digital Services Act, which came into full force in February 2024, has created the basis for comprehensive digital surveillance. Despite the lack of consensus, the proposals for Chat Control 2.0 continue to evolve and could be adopted as early as October 2025 under the Danish Presidency. The AI Act, which came into force in August 2024, completes the regulatory triad covering the entire ecosystem of digital interactions in the EU.
This normative architecture is marked by a fundamental contradiction: the formal strengthening of rights is accompanied by their material limitation through technical standards and administrative procedures favoured by European policymakers. The rights to privacy and freedom of expression are not abolished, but their content is determined by algorithmic decisions made within opaque coordination processes between supranational institutions and technology corporations.
The future trajectory of the European digital order is not predetermined. An analysis of current trends allows us to identify four key scenarios, each containing specific mechanisms, indicators and opportunities for timely intervention.
1. Scenario of inertial entrenchment
This scenario assumes a deepening of current processes without significant adjustments. The expansion of the DSA to all digital services is accompanied by the adoption of delegated acts on research access to data. The requirements of the AI Act for general-purpose systems are gradually extended to a wider range of applications.
Platforms adapt to the new regulatory requirements by centralizing moderation processes and mass-automating decision-making. A Vanderbilt University study shows that this strategy leads to systematic over-removal of content: platforms prefer to “play it safe”, taking down even legally permissible material to avoid regulatory sanctions.
For ordinary users, this transformation means life under a new digital regime: preventive scanning of personal messages before encryption becomes the technical norm, algorithms ever more aggressively filter “borderline” content in the gray zone between the permitted and the prohibited, and users grow used to mechanically accepting an endless stream of consent notices without reading them. The result is a general atmosphere of digital caution. Users learn the unwritten rule: better to say less than to risk algorithmic suspicion. Lively debate gives way to safe platitudes. In public discussion, the least controversial opinion, rather than the most convincing one, increasingly wins out, as it already does today.
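To make the first of these points concrete: the controversial idea behind proposals such as Chat Control is that scanning happens on the user’s device before end-to-end encryption is applied, which is precisely what hollows out the encryption guarantee. The following is a deliberately simplified sketch of that sequence; the exact-hash blocklist and the reporting hook are invented for illustration, whereas real proposals envisage perceptual hashing and classifiers, which are far more error-prone.

```python
# Illustrative sketch of client-side scanning: the message is checked
# BEFORE encryption, so confidentiality becomes conditional on the scan.
import hashlib
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical blocklist of exact hashes, purely for demonstration.
BLOCKLIST = {hashlib.sha256(b"known prohibited sample").hexdigest()}

def report_to_authority(digest: str) -> None:
    # Stand-in for a mandated reporting channel.
    print(f"flagged for review: {digest[:12]}...")

def send(message: bytes, cipher: Fernet) -> bytes | None:
    digest = hashlib.sha256(message).hexdigest()
    if digest in BLOCKLIST:         # the scan runs on the client, pre-encryption
        report_to_authority(digest)
        return None                 # the message is never encrypted or sent
    return cipher.encrypt(message)  # only unflagged messages get E2E protection

cipher = Fernet(Fernet.generate_key())
assert send(b"an ordinary message", cipher) is not None
assert send(b"known prohibited sample", cipher) is None
```

The point of the sketch is architectural: whatever the matching technology, the check precedes encryption, so the promise “your messages are end-to-end encrypted” becomes “your messages are encrypted if the scanner lets them through”.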
The main danger of this path is the creation of systems that cannot be abandoned. Mass surveillance technologies, once embedded in digital infrastructure, become part of its foundation. Dismantling them means rebuilding the entire architecture of the Internet, which requires colossal resources and political will that may not be available.
We are moving toward the point of no return gradually, through an accumulation of subtle changes. First, platforms automate most moderation, supposedly for efficiency. Then researchers are given access to data only through controlled interfaces, supposedly to protect privacy. Users prevail in disputes with algorithms less and less often, supposedly because the systems have become more accurate.
But the warning signs are already visible: when more than 70% of content decisions are made by machines, when “open data” exists only on paper and real access is limited to a small circle of accredited researchers, when appealing moderation decisions becomes a formality with predictably low chances of success.
2. Scenario of reactive correction
This scenario envisages self-correction through crisis: it rests on the assumption that the institutional equilibrium is disrupted by judicial precedents and political crises of confidence. The European Court of Human Rights’ February 2024 ruling that weakening end-to-end encryption “cannot be considered necessary” creates a legal basis for revisiting the most controversial provisions.
Unexpected alliances are emerging. Digital rights defenders find allies among academic researchers. National regulators are starting to resist supranational pressure. Courts are demanding that authorities provide more convincing arguments for each new restriction.
Mass surveillance practices are placed under the judicial microscope. Administrative expediency gives way to legal procedure. Algorithms are required to explain their decisions. Platform audits cease to be a monopoly of regulators; universities and civil society organizations take part on an equal footing. These changes come at the price of slowness and complexity: every decision requires justification, every measure judicial review. But in return, what seemed irretrievably lost begins to be restored: trust in institutions and a balance between security and relative freedom.
3. Scenario of normative fragmentation
This scenario paints a picture of a “Europe of different speeds” in the digital sphere. Formally, all EU countries are subject to the same rules, but in practice, each interprets them in its own way, based on national interests and political traditions.
Cracks in this unity are already visible. Some countries strictly follow the spirit of European privacy rules, while others find ways around them, citing national security. Against this background a “mosaic of digital rights” is emerging: a user in Berlin lives by one set of rules, a resident of Warsaw by another, although formally both are protected by the same European legislation. Companies engage in “jurisdiction shopping”, choosing the countries with the most lenient regulation for hosting servers and registering businesses.
The single market is bursting at the seams. Cross-border conflicts of norms multiply, and pan-European standards are eroded by national exceptions. Yet there is a hidden logic to this chaos: the different approaches become natural experiments, de facto laboratories for testing which models of digital regulation work and which do not. The most successful practices can then be scaled up across the Union.
Symptoms of fragmentation are already visible today, as national DSA coordinators interpret the same norms in radically different ways, platforms enter into special agreements with individual countries, and international courts are swamped with disputes over the application of European law.
4. Scenario of institutional transformation
The most radical scenario involves a complete overhaul of digital power, from vertical control to horizontal cooperation. Instead of officials in Brussels dictating the rules, universities, journalists, activists, and programmers create them together.
Imagine a world in which every algorithm works like an open book. Public registries show how moderation decisions are made. Any researcher can reproduce the results and check whether the system discriminates against particular groups. Appeals are no longer theater: one can actually understand the logic of the machine and challenge it.
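What such a public registry might minimally require can be sketched in a few lines of code. The example below is hypothetical, not a description of any proposed EU system: each moderation decision is appended to a hash chain, so that any outside auditor holding a copy of the log can detect records that were later altered or silently dropped.

```python
# Hypothetical sketch of a verifiable moderation log: hash-chaining makes
# tampering and omissions detectable by anyone who holds a copy.
import hashlib
import json

def append_decision(log: list, decision: dict) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    record = {"decision": decision, "prev": prev_hash}
    # The hash covers the decision and the previous link, chaining records.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

def verify(log: list) -> bool:
    prev_hash = "0" * 64
    for record in log:
        expected = hashlib.sha256(
            json.dumps({"decision": record["decision"], "prev": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

log: list = []
append_decision(log, {"item": "post-123", "action": "removed", "rule": "4.2"})
append_decision(log, {"item": "post-124", "action": "kept", "rule": None})
assert verify(log)                      # an untouched log passes verification
log[0]["decision"]["action"] = "kept"   # retroactive tampering...
assert not verify(log)                  # ...is immediately detectable
```

A real registry would need identities, timestamps and redaction rules to protect lawful privacy, but the core idea stands: verifiability, rather than trust, becomes the basis of appeal.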
Power is dispersing. Technical standards are set not by ministries but by multilateral committees, where each side has veto power. Universities check algorithms for bias, journalists test them for censorship, programmers monitor security.
It will be expensive and difficult. New institutions must be created, powers redistributed, a transparency infrastructure built. Many officials and corporations will resist as they lose control. But the result may justify the costs: a system that is trusted because it can be verified. The conflict between human rights and technological progress dissolves when citizens become co-authors of the digital future rather than its passive objects.
5. Mixed configurations and strategic bifurcation
It is important to understand that these scenarios are not mutually exclusive. They can coexist in different sectors and at different levels, forming hybrid configurations: elements of one complex system rather than alternatives. In reality they are intertwined: in one place the status quo is maintained, in another the courts force the authorities to retreat, elsewhere countries go their own way, and elsewhere still new forms of governance are born.
You can imagine digital Europe as a patchwork quilt. In one sector, platforms continue to automate moderation using old patterns. In another, court decisions force them to open up their algorithms. In a third, national regulators experiment with their own approaches. And in a fourth, researchers, journalists, and activists create new models of shared control. This diversity may seem chaotic, but there is logic to it. Fragmentation encourages experimentation, judicial adjustments prevent the system from becoming ossified, and inertia ensures stability where change would be disruptive.
The key is not to guess the future but to build a system capable of learning from its mistakes: mechanisms that allow bad decisions to be reversed, research results to be verified, all interested parties to be heard, and dependence on closed procedures to be avoided.
Future bifurcation
The European digital order has already irreversibly changed the practices of communication, journalism, scientific analysis and everyday use of technology. It has formally strengthened guarantees of rights while simultaneously bringing administrative control closer to the private sphere. The choice of trajectory will decide a fundamental question: will “privacy” and “freedom of expression” remain living norms with transparent mechanisms of protection, or will they finally turn into legal decorations for systems whose content is determined by algorithms and regulations closed to verification and to living, human participation? The resilience of the European political system in the digital age will be ensured only if technical standards and access procedures are embedded in political processes that are open to citizens and reproducible for researchers. If this condition is not met, a unified digital policy will deliver order and predictability, but at the cost of narrowing the space for autonomy, the very quality that distinguishes democracy from efficiently managed digital discipline.
The material reflects the personal position of the author, which may not coincide with the opinion of the editors.
© Article cover photo credit: Wikimedia Commons