New technologies always carry a degree of uncertainty. If a centralized regulatory body must first approve a technology’s safety and use, autonomy and creativity become increasingly irrelevant.
The global discussion on artificial intelligence has ceased to be a technical debate and has become a first-rate political conflict. In 2024, the European Union enacted the EU AI Act, the world’s most comprehensive AI regulatory framework, placing strict compliance requirements on developers and companies before their products reach the market. In the United States, executive orders have directed federal agencies to evaluate AI risks across sectors ranging from healthcare to national security. At the United Nations, calls for an international AI governance body grow louder with each passing summit. As this technology becomes the cognitive infrastructure of contemporary economies, the pressure for centralized control is accelerating at a pace that deserves serious scrutiny.
The expansion of artificial intelligence has not only ushered in a global technological revolution; it has also triggered a profound institutional transformation that alters the relationship between the State, the market, and civil society. As AI becomes the cognitive infrastructure of contemporary life, the foundation on which markets now develop, the tendency grows for its shape to be defined by a centralized power structure: the State.
This movement points to something broader and more structural than conventional regulation. It reflects a technocratic logic of power, a model in which decisions affecting economic and social life are delegated to experts who present themselves as holders of superior knowledge and, therefore, as authorized to define what counts as risk, what is acceptable, and what should be limited. This phenomenon intensifies precisely around AI, because the technology produces effects that escape common social understanding, creating opportunities for technical authorities to claim jurisdiction over nearly the entire sphere of innovation. The result is that the citizen is excluded not by force but by complexity.
In constitutional democracies, power is based on defined norms, the division of governmental functions, and legitimacy obtained through elections. In the technocratic model, this legitimacy is conferred upon technical knowledge. Experts, associated with the government or large corporations, serve as intermediaries between citizens and the understanding necessary to decipher the complexity of the digital universe. This intermediation converts knowledge into political influence. When the EU AI Act designates its own regulators as the authoritative arbiters of what constitutes a high-risk system, or when a federal agency determines by decree which AI applications require prior authorization, democracy does not disappear. It simply becomes less relevant.
This power rests on a singular narrative of risk. Artificial intelligence is seen by many as something unpredictable that must therefore be regulated before it causes permanent structural damage. This idea produces a significant shift in how policy is understood. Instead of public policies responding to real situations, they begin to act on anticipated theoretical scenarios. Assumption becomes the basis for intervention. Potential is used as justification for control. This model breaks with the tradition of freedom, which holds that individual action is permissible until proven otherwise.
This debate intensifies because a significant portion of those who advocate for the regulation of artificial intelligence argue that it is the first technology with the real potential to surpass human cognitive power and, at some point, dominate its own creators. No one knows whether this hypothesis is true. Some argue that this discourse also serves as a convenient pretext for expanding the power of the technocratic state, although there is not enough clarity to affirm this.
The central point is that this existential narrative creates the perfect environment for preventive interventions. Critics of regulation risk being delegitimized as reckless or indifferent to catastrophic risk, regardless of the quality of their arguments. The existential frame functions as a conversation stopper: If the stakes are civilization itself, who would dare oppose oversight?
Preventive regulation involves establishing restrictions before an action occurs or damage manifests itself. This concept contrasts with the fundamental premise of liberal economics, which values experimentation, errors, and spontaneous discovery. New technologies always carry a degree of uncertainty. However, the trajectory of innovation reveals that societies progress when they allow for diverse attempts and when the discovery process is not stifled by prior state requirements.
By requiring that each AI innovation receive approval from a technical committee before deployment, the State eliminates individual creative freedom. What emerges in its place is a culture of authorization that exchanges autonomy for caution and entrepreneurial risk-taking for institutional dependence. A young developer in São Paulo or Lagos with a novel AI application does not face a level playing field. She faces compliance lawyers, certification processes, and registration fees she cannot afford. The model diminishes the creative capacity of society and transfers decision-making power to administrative structures entirely disconnected from competitive reality.
This dynamic becomes clearer when we observe how different societies respond to innovation. Some cultivate a problem-solving ethos, where creativity is directed toward building new solutions and expanding technological frontiers. Others develop a problem-finding mentality, in which existing solutions are treated primarily as sources of potential risks that must be contained. This contrast mirrors the structural tension created by preventive regulation. It encourages a bureaucratic reflex that questions innovation before it exists, replacing exploration with suspicion and turning creativity into an activity that requires prior justification rather than free expression.
An additional risk of artificial intelligence regulation is the possibility of capture. Because complex technical standards require significant resources, only large companies can comply with them. Small and medium-sized enterprises, universities, and independent researchers face prohibitive costs. This creates an institutional selection effect in which a few agents come to dominate the technological ecosystem.
Although this seems like an economic issue, its roots are political. The concentration of technology creates partnerships between the State and large companies. The official justification is the security and protection of citizens. In practice, this results in a scenario where civil society loses its importance and independent innovation becomes an exceptional occurrence. It is a process of alignment between the state bureaucracy and technological oligopolies, where both act as filters of social creativity.
Many regulatory proposals include mechanisms to monitor content generated by artificial intelligence. Terms such as “harmful content,” “sensitive information,” or “social risk” are commonly found in official documents. The difficulty lies in the malleability of these definitions. There is no clear consensus on what constitutes an informational risk, and this lack of clarity allows for subjective and arbitrary interpretations.
When the State or regulatory entities assume the role of filtering algorithmic content, a silent reorganization of the public sphere occurs. The plurality of ideas is replaced by standardized criteria of acceptability. Public debate loses heterogeneity and gains centralized mediators. The institutional consequence is profound. The Technocratic State does not only control technologies. It controls the flow of information that structures social deliberation itself.
The logic of preventive protection assumes that people lack the capacity to assess risks and use technologies responsibly. This reasoning transfers responsibility from the individual to the State. The citizen ceases to be a moral agent and comes to be seen as someone permanently vulnerable who needs continuous protection. This conception compromises the philosophical foundations of freedom, which rest on the capacity of each individual to act, experiment, and learn.
The risk lies not only in the practical limitations of technology but also in the formation of a social vision where autonomy is considered a threat. This cultural shift strengthens the Technocratic State by legitimizing its expansion into the moral field. If citizens cannot take responsibility, someone must assume that responsibility. This figure inevitably transforms into a central authority that claims power in the name of security.
Artificial intelligence marks the beginning of a period in which cultural, financial, and political decisions will be influenced by algorithm-based systems. The problem lies not only in the potential of the technology but also in the way societies structure their governance. If the solution involves technical centralization, the consequence will be the gradual loss of autonomy and creativity. If the solution prioritizes decentralized structures and individual responsibility, the technology will function as a productive engine that expands opportunities.
The Technocratic State represents a new invisible risk of the 21st century, as it does not present itself as a conventional political authority. It appears as a technical solution that seems inevitable. Recognizing this change is the first step to maintaining freedom. The second is to understand that freedom is not about lacking regulation but rather about limiting power. The third is to ensure that the future of artificial intelligence is shaped by a society capable of thinking, creating, and disagreeing, instead of being shaped by centralized structures that seek to control uncertainty through decrees.
History teaches us that great advances arise when human creativity finds room to breathe. The Renaissance flourished when artists and thinkers could explore the new without fear of censorship. The Enlightenment took hold when reason was free to shine by its own light, independent of authorities that tried to control thought. Scientific revolutions were born when individuals dared to investigate what was considered impossible. Artificial intelligence, despite the uncertainties it provokes, carries the same potential for cultural and intellectual expansion.
If we resist the temptation of excessive control and preserve an environment of responsible freedom, it can open doors to a new cycle of discovery and collaboration. The future does not have to be a territory dominated by inertia or fear. It can be the next stage in humanity’s long tradition of transforming knowledge into progress, and it is up to us to ensure that the spirit that propelled the great moments of civilization continues to live on in the digital age.
____________________
Henrique Rokembach is a partner and vice president at HLB Brasil. He also serves as a board member of HLB International and is part of the International Assurance Committee (IAC). In Brazil, he is a member of the CRCSP New Leadership and Technical Audit Committees.