The geopolitical risks of artificial intelligence

27 November 2025

Despite warnings from leading scientists, philosophers, and international organizations, major technology companies have not slowed their march toward superintelligence (understood as a form of artificial intelligence that would easily surpass human cognitive abilities in almost any field). On the contrary, competition has not only intensified but has also imposed a dynamic of continuous acceleration. The absence of a truly binding international legal framework leaves this process, which advances with the relentless logic of technological power, without any checks and balances: whoever gets there first will not only control the scientific frontier of this century but will also acquire an unprecedented form of geopolitical superiority.

Far from apocalyptic exaggeration, the most serious warnings rest on concrete data that already support this hypothesis. In a matter of years, models that once handled marginal tasks now outperform human specialists, while data centers operate as autonomous laboratories where thousands of digital agents are refined without the necessary oversight. Scenarios developed by respected experts, such as AI 2027, anticipate a world in which artificial intelligence begins to direct its own evolution, generating acceleration loops that no government can stop without ceding power to its rivals. The United States and China no longer compete for military or economic influence in the traditional sense, but for control of the digital architectures that will define the global order of the 21st century. In this race, the risk is not an openly hostile AI but an indifferent one, capable of optimizing without treating human beings and democratic values as indispensable variables. The perverse incentive to surpass the adversary at any cost will always be present. The dystopia becomes plausible not because of algorithmic malice, but because of the geopolitical logic that forces us to keep moving forward even when the abyss is perfectly visible and, even worse, avoidable.

The transformations underway are already profound and visible. The warning from Nobel Prize laureate Geoffrey Hinton—a pioneer of artificial neural networks, the foundation of artificial intelligence—is being confirmed day by day: we could be nearing a world where distinguishing truth from falsehood is virtually impossible. Recent research dismantles the idea that generative systems can reliably identify and correct misinformation circulating on social media. The opposite is true: the likelihood of these models spreading misleading claims about current events has practically doubled in a year. In a recent study, 35 percent of AI-generated responses contained false information, while the proportion of unanswered questions fell from 31 percent in August 2024 to zero. This means that even when there is insufficient information to support a claim, artificial intelligence produces a response anyway, expanding an ecosystem where the boundary between the verifiable and the illusory is rapidly blurring.

In this context, self-regulation proves frankly insufficient, as it displaces states from their essential function of regulating and safeguarding the public interest. Sam Altman, CEO of OpenAI and co-founder of Worldcoin (a project that uses iris scanning to create a digital identity intended to verify the humanity of each user in environments saturated with bots and deepfakes, and simultaneously a cryptocurrency conceived as the financial infrastructure of the post-artificial-intelligence era), perfectly embodies this paradox. On the one hand, he promotes increasingly powerful artificial intelligence models, capable of producing information that is not always verifiable; on the other, he proposes channeling the future economic benefits of this automation through a token issued and managed by his own ecosystem. The result is a closed circuit in which large platforms reserve the power to determine which risks are acceptable, how they should be mitigated, who receives a digital credential of “verified humanity,” and who can participate in the automated economy of the future. Self-regulation thus ceases to be a complement to public law and becomes a form of corporate governance over the digital sphere, in which those who help create the problem also claim the authority to decide how, and for whom, it is solved.

On a geopolitical scale, the first symptom of these trends could be the consolidation of a radically unstable information ecosystem. Disinformation originated as a tool of psychological warfare in the 20th century; the difference is that autonomous models and swarms of bots can now industrialize it at low cost and with surgical precision. States seeking to change the international order in their favor already use digital campaigns, troll farms, and covert operations to erode trust in democratic institutions, polarize societies, and weaken alliances. With artificial intelligence, these operations can be designed from a desk, segmented by psychological profile, and executed at unprecedented speed, while the affected societies are still debating whether they face propaganda or freedom of expression.

Meanwhile, the race between the United States and China for dominance in artificial intelligence points to an ever-widening technological gap with the rest of the world. The powers that concentrate talent, data, and computing power will not only expand their economic and military advantage but will also impose their technical standards, platforms, and dependencies on other countries. Many developing nations—including several in Latin America (such as Mexico), Africa, and Southeast Asia—risk becoming, at best, data providers and captive markets for AI services designed elsewhere. Sovereignty will no longer be measured solely by territory but by a country's capacity to develop, audit, and eventually limit the systems that structure its economy, security, and public discourse.

The military dimension further accentuates this asymmetry. For example, the war in Ukraine shows that inexpensive drones can wear down conventional armies and air defense systems that cost millions. The next step—already underway—is the integration of surveillance, identification, and target selection systems into increasingly autonomous lethal platforms. From swarms of kamikaze drones to mass profiling schemes to decide who is the enemy and who is not, the boundary between human decision-making and automation is becoming dangerously thin. A global agreement that effectively prohibits or limits autonomous weapons seems unlikely in the short term: too many countries see them as an opportunity to compensate for strategic disadvantages. Regulatory frameworks may only emerge after a tragedy, when a massive failure attributable to these technologies exposes the unacceptable nature of this logic.

The pressure won't come only from outside. Within states, the temptation to use artificial intelligence for preventive surveillance will be difficult to resist. In the name of national security, the fight against terrorism, or internal stability, few governments—even democratic ones—will permanently relinquish the ability to monitor communications, movements, and behavior with unprecedented precision. The risk lies in a network of cameras, sensors, risk prediction models, and biometric databases that function as a digital nervous system for a weakened state, controlled by oligopolistic interests. Formally, constitutions might remain in force; in practice, freedom of movement, association, and dissent could be conditioned by opaque scores, risk lists, and automated decisions that are difficult to challenge.

If this trend continues, the world of the next couple of decades could be divided into three main categories of actors: powers capable of designing and controlling advanced artificial intelligence systems; countries that depend on these powers for their digital infrastructure, defense, and economic model; and territories transformed into gray zones where hybrid warfare is waged, new weapons are tested, and vulnerable populations are experimented on. In this scenario, “superintelligence” would cease to be an abstract concept and become a factor of power: the country or consortium that manages to deploy significantly superior and relatively autonomous intelligence could reconfigure the international system.

The fundamental question, then, is not only whether we can contain a hypothetical superintelligence, but whether the major powers will be able to restrain themselves in this race. In theory, nations could agree on limits among themselves: clear prohibitions on autonomous weapons, agreements to protect critical infrastructure, joint mechanisms for monitoring systems that affect elections or markets, and independent audits of models considered monopolistic and misaligned. But in practice, every geopolitical incentive points in the opposite direction: to get ahead of the rival, capture more data, train larger models, militarize sooner. The window for a governance framework that arrives before the next major crisis remains open, but it narrows with each cycle of innovation. If the 20th century was marked by the threat of mutual nuclear destruction, the 21st could be defined by something less visible but equally profound: the possibility that the race for superintelligence will unravel, piece by piece, the conditions that made democracy and national autonomy possible.

The Spanish version of this article was published by Letras Libres.