Democratic Deficit in Digital Governance: From Rights Protection to Administrative Concentration of Power — I

Six months after the publication of the first two parts of the study on new EU digital regulations, the practical consequences of legal changes in the European Union can be objectively assessed, and the dynamics of the first quarter of 2026 make it possible to trace the development of key trends.
First, due to the transparency requirements of the DSA, independent researchers have faced difficulties in accessing data. The situation is exacerbated by the reaction of the platforms themselves: the fine imposed on X for failing to provide access to advertising data demonstrated that major players prefer the risk of sanctions to information disclosure.
Second, the collision of EU, US, and UK legal norms has deepened, escalating into an open confrontation. In February 2026, a new US Congress report explicitly accused the European Commission of a decade-long campaign of global censorship. Meanwhile, European developers have started relocating their businesses to jurisdictions with more favorable conditions.
Finally, under the European Union’s ProtectEU strategy, the development of a technological base giving law enforcement agencies access to encrypted communications is scheduled precisely for 2026.
Part III of a series of investigations into cognitive policy in the European Union.
The previous part looked in detail at how the Digital Services Act (DSA) has become a central instrument in shaping Europe’s new digital order based on media perception management.
Next, we will examine a serious internal paradox of the new digital regulation that has taken hold in the European Union. On the one hand, it increases the transparency and accountability of platforms; on the other, it strengthens state control and creates structural barriers. In practice, this state of affairs can lead to two main consequences:
1. Monopolization of knowledge. Complex procedures (such as researcher accreditation under the DSA) cut off small organizations and independent experts, concentrating access to data in the hands of large players.
2. Monopolization of the market. High costs (such as certification under the AI Act) become an insurmountable barrier for startups, suppressing competition and innovation and strengthening the position of tech giants.
Below is a study of the transformation of the European digital space. Possible scenarios for its further changes will be considered in the continuation of the material.
I. Digital Law and Human Rights
The European human rights protection system is built on two main documents: Article 8 of the European Convention on Human Rights, which guarantees the right to respect for private and family life (Convention for the Protection of Human Rights and Fundamental Freedoms, 1950), and Articles 7, 8 and 11 of the EU Charter of Fundamental Rights on privacy, data protection and freedom of expression [1] (Charter of Fundamental Rights of the European Union, 2000). However, new digital regulations create an unprecedented tension between these fundamental guarantees and the practical needs of digital regulation, revealing the fundamental dilemma of modern regulation: the greater the technical possibilities for control, the more difficult it is to preserve the basic principles of the rule of law. The transition from targeted to mass monitoring changes the very nature of the right to privacy — from protection against specific interference to an exception carved out of a system of general surveillance.
The Digital Services Act is intended to balance security and rights, but its implementation relies on a centralized system for controlling access to key information. After the DSA began to apply to all online platforms and search engines, the European Commission adopted a delegated act on researchers’ access to data. The document opened a single procedural gateway for accredited teams studying systemic risks. It simultaneously enshrined strict selection criteria and regulated the formats in which platforms are required to share internal data sets. At the same time, the national Digital Services Coordinator is obliged to inform the European Commission and the board of coordinators. This expands external audit, but indirectly transfers control over initial clearance and the list of data sets to be disclosed to a supranational body.
Thus, the problem of bias and selectivity in the choice of topics and performers is embedded in the very architecture of this system, which makes objective fact-checking much harder.
Experience shows that research requests are heavily concentrated with the Irish Digital Services Coordinator, one of the key bodies empowered to accredit researchers for access to data from many major platforms. This creates a bottleneck where the institutional preferences of a single regulator can shape research across the EU.
At the same time, researchers face a “data paradox”: to justify a request, they must describe in detail the data they need without knowing its structure in the platform systems. Even though the act requires very large platforms (VLOP/VLOSE) to clarify the structure and logic of algorithms, access to these clarifications is possible only upon request from the regulator and only for accredited researchers through a multi-stage bureaucratic procedure.
A different vector is set by the AI Act, which introduces horizontal rules for artificial intelligence systems and separate responsibilities for developers of general-purpose systems. The legal text emphasizes the need for risk assessment, model manageability, and documentation of the data life cycle. For human rights, this means introducing privacy and discrimination checks as a mandatory norm. At the same time, the high cost of compliance becomes a barrier that distorts the balance of power in favor of those who can afford it: creating a quality management system for a high-risk AI system costs €193,000–€330,000, with an additional annual cost of €71,400, a routine expense for a large player but potentially a barrier to entry for a small team. Therefore, the balance between safety and innovation is not only a question of principles, but also of the resources needed to withstand regulatory scrutiny and the certification of high-risk applications.
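The asymmetry described above can be made concrete with a back-of-the-envelope calculation. A minimal sketch follows: the cost figures are taken from the text, while the revenue figures for the large player and the startup are purely hypothetical illustrations.

```python
# Back-of-the-envelope sketch of the compliance-cost asymmetry.
# Cost figures come from the text; revenue figures are hypothetical.

SETUP_COST_EUR = (193_000, 330_000)   # one-off quality-management system, low/high estimate
ANNUAL_COST_EUR = 71_400              # recurring annual compliance cost

def first_year_burden(annual_revenue_eur: float) -> tuple:
    """Return first-year compliance cost as a percentage of revenue (low, high)."""
    return tuple(
        100 * (setup + ANNUAL_COST_EUR) / annual_revenue_eur
        for setup in SETUP_COST_EUR
    )

big_player = first_year_burden(10_000_000_000)  # hypothetical tech giant, €10bn revenue
startup = first_year_burden(500_000)            # hypothetical seed-stage team, €500k revenue

print(f"large player: {big_player[0]:.4f}%-{big_player[1]:.4f}% of revenue")
print(f"startup:      {startup[0]:.0f}%-{startup[1]:.0f}% of revenue")
```

On these assumptions the same statutory requirement consumes a few thousandths of a percent of a giant’s revenue but more than half of a small team’s, which is the distributional effect the text describes.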
This is why the overall policy intent of protecting rights must be accompanied by control over distributional effects, otherwise the standards will de facto consolidate the concentration of technological and regulatory power in the hands of a few actors. An unintended consequence could be the effect of so-called “regulatory squeeze”: startups and research teams transfer the development of AI systems to jurisdictions with less stringent requirements, which undermines the goal of the act to create “trustworthy AI” in Europe.
In the European media sphere, the implementation of the European Media Freedom Act (EMFA) was presented as strengthening the protection of sources and editorial independence. At the same time, the law strengthens the regime of ownership transparency and introduces instruments for oversight of interference in editorial decisions, which requires constant scrutiny of freedom of expression and the absence of hidden incentives for self-censorship. The EMFA enforcement architecture is based on the European Media Services Council, a group of national regulators, and on a system of opinions and recommendations that are formally non-binding but create reputational pressure on violators. This means that the actual strength of these mechanisms will depend on the willingness of the European Commission to apply infringement procedures against Member States that do not comply with the act.
The European Parliament stresses the goal of strengthening democracy and journalism, but this is where the question of practice and its compliance with the standards of necessity and proportionality arises, since the assessment of interference in editorial processes now inevitably moves to the level of administrative procedure. The risk is that the formalization of protection could weaken the real independence of the media if procedural requirements become an instrument of bureaucratic control.
To sum up, the new regulatory framework creates a paradox: it simultaneously makes platforms more accountable to society and increases government control over digital communications. The DSA opens up access to internal platform data to study algorithmic risks, but the accreditation procedure for researchers is so complex that only large universities with administrative resources can pass it, leaving small research groups and independent journalists out in the cold. As I wrote earlier, the AI Act requires certification of high-risk systems costing €193,000–€330,000. For Google or Microsoft this is, again, a routine expense, but for a startup it is an insurmountable barrier to entry, with the result that innovation is concentrated in the hands of tech giants.
The EMFA protects journalists from spyware, but complaints procedures and accreditation of “independent media” may be limited to large newsrooms with legal departments, while smaller outlets risk losing their voice. The legal sustainability of this system will depend on the practical application of the principles of necessity and proportionality. The critical question here is whether procedural requirements such as accreditation, certification, and bureaucratic filters will become a tool for genuine rights protection or a new way of concentrating influence in the hands of those who can afford the high costs of compliance. If access to protective mechanisms is determined by financial and administrative capabilities, then the rhetoric about rights will become a screen for the redistribution of power in the digital sphere from small players to large corporations and institutions.
II. Geopolitical intersections
European digital law has become a subject of foreign policy games and has found itself at the center of a new round of transatlantic competition of norms. In August 2025, coordinated attempts to influence the application of European norms were recorded in the United States. The diplomatic cable of August 4, 2025, signed by Secretary of State Marco Rubio, instructed American diplomatic missions to step up lobbying against the DSA, appealing to the risks to free speech and the interests of American companies. Substantively, this signal was reinforced by the Republicans in Congress, who released a report on so-called foreign censorship on July 25, 2025, in which the DSA is presented as a mechanism for influencing the American public sphere. Tensions were further heightened by a letter from Federal Trade Commission Chairman Andrew Ferguson of August 21, 2025, which warned major platforms against implementing European legislation in a way that undermines the rights of users in US jurisdiction.
American initiatives against European regulation are unfolding on the Washington political scene with unprecedented intensity. On September 3, 2025, the House of Representatives held a hearing on “Europe’s Threat to American Speech and Innovation,” where the British Online Safety Act and the European DSA were criticized in the same breath. This format of public pressure demonstrates that debates about moderation and encryption are becoming an element of the transatlantic competition of norms and values. It is significant that the maximum fines under the DSA are 6% of a company’s global turnover, and under the British OSA the greater of 10% of global turnover or £18 million, which effectively means that European regulators collect the bulk of such fines from the American revenues of tech giants.
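For illustration, the cited fine caps can be sketched as follows. The statutory percentages and the £18 million floor come from the text; the turnover figures are hypothetical, and currencies are deliberately not converted.

```python
def dsa_max_fine(global_turnover: float) -> float:
    """DSA cap: 6% of a company's global annual turnover (per the text)."""
    return 0.06 * global_turnover

def osa_max_fine(global_turnover: float) -> float:
    """OSA cap: the greater of 10% of global turnover or a £18m floor (per the text)."""
    return max(0.10 * global_turnover, 18_000_000)

# Hypothetical tech giant with 100bn global turnover:
print(dsa_max_fine(100_000_000_000))  # 6% cap
print(osa_max_fine(100_000_000_000))  # 10% cap, larger than the floor

# For a small service (50m turnover) the fixed £18m floor exceeds 10% of turnover:
print(osa_max_fine(50_000_000))
```

The sketch makes the point in the text visible: for a multinational whose turnover is dominated by US revenues, both caps are priced largely off income earned outside the regulator’s jurisdiction.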
UPD: The escalation continued in December 2025: the Trump administration imposed visa sanctions against former European Commissioner Thierry Breton — a key architect of the DSA — and four representatives of NGOs from the UK and Germany, accusing them of participating in a global censorship campaign aimed at American digital platforms. In February 2026, the House Judiciary Committee published the second part of the report “The Foreign Censorship Threat”, where, based on documents from Big Tech, it also accused the European Commission of a decade-long campaign of global censorship, including influencing freedom of speech within the US.
In parallel to these processes, a global trend towards restricting encryption in adjacent jurisdictions is gaining momentum. In the UK, Ofcom has been given tools to implement the Online Safety Act in practice, including the power to require services to install accredited content-scanning technologies that operate before messages are encrypted. This effectively permits mass client-side scanning and has drawn objections from the tech community, especially given that the Wikimedia Foundation has filed a lawsuit against Wikipedia’s potential designation as a “Category 1” service subject to the strictest requirements.
Within the European Union, data access policy has received a new dimension in the ambitious ProtectEU strategy presented in April 2025. The European Commission presented a roadmap for so-called lawful access, which sets clear time frames: an impact assessment of data retention rules in 2025, preparation of a technology roadmap for encryption solutions in 2026, and, by 2030, the development of a new generation of decryption capabilities for Europol. This effectively constitutes a long-term program to weaken the practical inviolability of end-to-end encryption in favor of law enforcement access, while formally complying with the legal tests of necessity and proportionality.
Of particular concern is also the fact that the recommendations come from a High Level Group composed primarily of law enforcement officials, without equal representation of cybersecurity and human rights experts. This creates a systemic imbalance in favour of increased surveillance powers at the expense of technical and legal safeguards.
Taken together, these lines add up to a structural jurisdictional conflict with far-reaching implications for the global digital order. Platforms and infrastructure providers face competing demands, creating what might be called a “regulatory trilemma”: it is impossible to simultaneously satisfy American free-speech standards, European demands for transparency and risk management, and the British-Australian model of forced access to encrypted communications.
The specific contradictions are as follows: in the US, the First Amendment is designed to protect even controversial speech from government censorship, while the European DSA requires the active identification and removal of “illegal content” and “disinformation.” The UK’s Online Safety Act (as well as Australia’s Assistance and Access Act) go even further, requiring platforms to technically provide law enforcement access to encrypted messages, which runs counter to both US privacy principles and European data protection standards.
As a result, Meta* has already claimed that DSA amounts to censorship of their platforms, and big tech companies are being forced to create different versions of their products for different jurisdictions, a practice known as “digital balkanization”. Telegram has threatened to leave the European market entirely rather than implement “backdoors in encryption”, while the UK authorities are demanding precisely such technical solutions.
UPD: On December 8, it became known that Meta had made concessions to the EU and changed its advertising settings for users from the European Union starting in January 2026 to avoid new fines. At the same time, Elon Musk’s platform X received a €120 million fine for transparency violations under the DSA, the first such penalty after a two-year investigation. This provoked a sharp reaction from Musk: he called for the “abolition” of the EU and blocked advertising from the European Commission on his platform. Two other EU investigations against X are still ongoing.
All of these processes generate regulatory friction, stimulate the search for business-friendly legal systems, and increase the likelihood that the most vulnerable party will be the user, who formally receives more guarantees but in fact lives in an environment of increasing preventive control and ambiguous procedures for accessing private information. It is telling that the Five Eyes countries, traditionally advocating internet freedom against authoritarian regimes, are simultaneously creating their own digital control infrastructure, using “national security” arguments similar to those of their opponents.
The long-term consequences of this regulatory schism may be more serious than initially expected. The fragmentation of the global Internet along jurisdictional lines not only undermines the universality of digital rights, but also creates the preconditions for a technological “cold war,” where cybersecurity and privacy issues become instruments of geopolitical pressure. In this context, the protection of human rights risks becoming hostage to competition between models of digital governance, as each side presents its approach as the only democratic and secure one.
III. Redistribution of power within the EU: digital sovereignty as a tool of centralization
European digital regulation has transformed from a technical tool into a mechanism for a fundamental redistribution of power between national capitals and supranational institutions. This metamorphosis is most clearly evident in the architecture of the Digital Decade Policy Programme 2030, an ambitious digital transformation programme enshrined in Decision (EU) 2022/2481 of 14 December 2022.
Formally, the programme declares the achievement of “common goals” of digitalisation — from the development of basic digital skills among 80% of the population to the implementation of cloud technologies, artificial intelligence and big data in 75% of European companies by 2030. However, behind this technocratic rhetoric, one can trace a deeper institutional transformation. The annual reporting mechanism built into the programme effectively establishes a vertical accountability regime: Member States are required to submit detailed national roadmaps to the European Commission, and the Commission is given a mandate not only to assess their compliance with pan-European goals, but also to coordinate corrective measures through a new toolkit of Multi-Country Projects (MCPs).
In the political-theoretical dimension, this means a transition from the model of “divided sovereignty” to a system of “managed convergence”, where digital policy, traditionally within the sphere of national parliamentary control, is transformed into a subject of supranational administration. Brussels is becoming not only the coordinator, but also the architect of the continent’s digital future.
Technological sovereignty as a political project
The concept of technological sovereignty, articulated in the European Parliament Resolution A-10-2025-0107 of 11 June 2025, is a paradoxical construction. The report, prepared by the Committee on Industry, Research and Energy (ITRE), states that the EU is more than 80% dependent on external suppliers for digital products, services and intellectual property. However, the proposed solution — strengthening control over critical technologies through “democratic institutions” — in practice does not mean strengthening national democracies, but rather consolidating power in the hands of a supranational bureaucracy.
The result is a “double centralization” effect: Member States lose influence over the formation of strategic priorities, and citizens are deprived of the opportunity to influence digital policy through national elected institutions. The democratic deficit, long discussed in the context of European integration, takes on a new dimension in the era of digital transformation.
ECAT: Institutionalizing Algorithmic Power
The creation of the European Centre for Algorithmic Transparency (ECAT) in April 2023 marks the emergence of a fundamentally new type of regulatory authority: expert-algorithmic. Formally, the centre, which operates within the structure of the Joint Research Centre (JRC) of the European Commission in Seville, is called upon to provide scientific and technical expertise for the application of the Digital Services Act (DSA).
However, it is interesting to note that ECAT’s real powers go far beyond technical support. The Centre has a mandate to access the algorithms, internal data and content moderation systems of the largest online platforms (VLOPs) and search engines (VLOSEs). The interdisciplinary team of data scientists, AI experts, sociologists and lawyers effectively gets the opportunity to look “under the hood” of digital giants, assess systemic risks and prescribe measures to mitigate them.
Crucially, this mediation between platforms and society is vertically embedded in the structure of the Commission. It is Brussels that determines what data will be made available to researchers through the Article 40 DSA mechanism, in what format and with what restrictions. Transparency is thus transformed into a centralized resource, distributed from a single center. A new and dangerous form of inequality in access to information is emerging: supranational institutions receive a full picture of what is happening in the digital environment, while independent researchers and the public can see only a carefully filtered and dosed version of reality.
This asymmetry creates a situation where objective fact-checking becomes virtually impossible — the fact-checker is deprived of access to primary information and cannot assess the completeness or bias of the data provided. In essence, this leads to the institutionalization of systemic bias, in which the only reality that can be verified is the one that has been constructed and approved in advance by the data “gatekeepers” in Brussels.
Compliance Economics: Cost Inequality Built into the System
It is worth emphasizing in this context that the architecture of European digital regulation creates a fundamental asymmetry in the distribution of compliance costs. Research by ECIPE and other think tanks shows that small economies and companies bear a disproportionately high share of the costs of implementing new regulations, from the DSA to the AI Act. This observation is also confirmed by more recent studies by BusinessEurope, CCIA, and ITIF.
For multinationals, compliance costs are becoming manageable operational costs integrated into business models. Large platforms create specialized units, hire armies of lawyers and lobbyists, and develop automated systems to meet regulatory requirements. At the same time, for small and medium-sized enterprises (SMEs), these same requirements often turn into insurmountable barriers to market entry.
The paradox is that regulation, declared as a tool to limit the dominance of Big Tech, in practice strengthens their position. Raising the regulatory threshold creates the effect of a “regulatory moat” that protects established players from potential competitors. As a result, Europe’s dependence on American and Chinese tech giants does not weaken, but is preserved in a new institutional form.
Media Freedom Act: Freedom under surveillance
The European Media Freedom Act (EMFA), which entered into force on 7 May 2024 and is fully applicable from 8 August 2025, is an ambivalent construction. I have already mentioned that, on the one hand, the law introduces unprecedented guarantees to protect journalistic sources, limits the use of spyware against the media, and requires transparency in the distribution of state advertising. But on the other hand, the law expands the tools of administrative oversight of media processes.
The creation of the European Board for Media Services in February 2025 institutionalizes a supranational level of media regulation. The Council is empowered to coordinate the actions of national regulators, assess the impact of media concentrations on pluralism and editorial independence, and act as an arbitrator in disputes between platforms and media providers.
As a result, national media regulators and professional communities find themselves embedded in a hierarchical system, where the framework for freedom of speech and editorial autonomy is set at a supranational level. This creates the risk of unifying media standards to the detriment of national and cultural diversity — one of the fundamental values of the European project.
The architecture of the “new power”: from multi-level governance to the bureaucratic centralism of Brussels
The combination of the described processes forms the contours of a new power architecture in the EU, where digital regulation acts as a catalyst for deep institutional transformation. The multi-level governance model, which assumed a complex interaction between national and supranational actors, is being replaced by a system that can be described as “bureaucratic centralism with a digital face.”
In this system, Brussels controls not only the regulatory framework, but also the interpretation of data, the allocation of resources, and the determination of development priorities. Nation states are transformed into executors of centrally developed strategies, and citizens are transformed into controlled users of a system in which key decisions about the boundaries of what is acceptable in the digital space are made by a remote technocratic elite.
The threat lies not so much in the regulations themselves — many of which respond to the real challenges of the digital age — as in the institutionalization of long-term power asymmetries. Each new regulatory act increases the dependence of national systems on supranational infrastructure, creating an institutional lock-in, from which it becomes increasingly expensive to escape.
Europe is moving towards a highly governable and predictable digital single market. This order promises protection against systemic risks, algorithmic accountability, and respect for fundamental rights in the digital environment. But the price of these achievements could be the erosion of democratic control: public space risks becoming a closed system, where acceptable boundaries are set by algorithms, expert assessments, and administrative decisions made in departmental labyrinths inaccessible to citizens.
If the current trajectory continues, Europe risks ending up with a technically flawless digital order devoid of democratic content – a kind of “digital Leviathan” that offers security and stability in exchange for giving up national sovereignty and the possibility of real public participation in determining the future of the digital environment.
The Erosion of Democratic Subjectivity in the Digital Age
The fundamental transformation of democratic participation within the European digital order manifests itself not so much in the formal restriction of voting rights, but in the structural shift of the locus of decision-making beyond the reach of traditional democratic mechanisms. When the content of the right to privacy is determined through the technical specifications of moderation algorithms and the boundaries of freedom of expression are set by delegated acts on “systemic risks”, citizens find themselves in the position of passive recipients of decisions taken in institutional spaces where their voice is structurally inaudible. A voter can de jure influence the composition of the European Parliament, but not the machine learning parameters that determine the visibility of his or her posts on social media. A national parliament can criticize the DSA, but it cannot change the algorithmic thresholds by which content is classified as “potentially harmful”.
Technocratic captivity of democratic institutions
The peculiarity of the current situation, which our analysis brings into focus, is that democratic institutions do not disappear, but fall into a complex dependence on technocratic interpretations presented as “objective” and “data-based”. The European Commission justifies the expansion of ECAT’s powers by the need for “scientific and technical expertise”, national coordinators of digital services refer to “best international practices”, and platforms, in turn, appeal to the “limitations of technology” when explaining moderation decisions. In this configuration, political questions about the desired character of the public sphere are disguised as technical problems of algorithm optimization. The result is not the abolition of democracy, but its structural devastation: formal procedures are preserved, but their impact on the real conditions of citizens’ digital life becomes indirect, mediated by multiple layers of technocratic interpretation. Democratic institutions are transformed into ratifying bodies for decisions made within the framework of the “objective” logic of technical regulation. The only question is who or what is behind the adoption of these technological decisions.
[1] ECHR Article 8: “Everyone has the right to respect for his private and family life, his home and his correspondence.”
EU Charter: Article 7: Respect for private and family life. Article 8: Protection of personal data. Article 11: Freedom of expression and information.
* Banned in the Russian Federation.
The material reflects the personal position of the author, which may not coincide with the opinion of the editors.
© Article cover photo credit: Wikimedia Commons