In recent years, from European Parliamentary elections to presidential campaigns in multiple countries, misinformation in cyberspace has once again moved to the center of international political debate. Unlike earlier waves of rumors or public opinion manipulation, the current surge of risk is largely driven by the rapid development of generative AI. The European Union Agency for Cybersecurity has classified the misuse of artificial intelligence in elections as a “hybrid threat.” In response, governments around the world have begun to expand institutional efforts aimed at regulating and overseeing generative AI technologies.
The development of generative artificial intelligence is transforming not only how misinformation spreads but also the fundamental logic of information production. Traditionally, misinformation relied on distorting real-world events through manual editing. Today, however, fabricated text, images, audio, and video can achieve a high degree of realism, increasingly demonstrating the capacity to construct alternative realities. As a result, the public faces growing difficulty in distinguishing truth from falsehood, placing greater strain on systems of information trust.
Moreover, large language models generate content based on probabilistic predictions. Biases embedded in training data, combined with the phenomenon of “AI hallucinations,” allow misinformation to emerge even without deliberate human manipulation, arising instead from structural uncertainties within the technology itself. These developments have pushed the governance of misinformation beyond the traditional paradigm of censorship and content removal, elevating it to the level of institutional design and global governance.
Because generative AI is reshaping information production faster than regulatory norms can be established, countries have adopted divergent institutional responses. The European Union emphasizes risk classification and regulatory leadership, strengthening transparency requirements and corporate accountability through legislation. Its objective is to build a controllable governance framework during the early stages of technological diffusion while projecting global influence through regulatory standard-setting.
The United States, by contrast, incorporates AI risks into a broader framework of national security and strategic competition while continuing to encourage technological innovation and market-driven development. It relies more heavily on administrative guidance and industry self-regulation to balance innovation with security concerns. China’s approach places stronger emphasis on platform accountability and proactive governance, integrating generative AI into its broader digital governance framework through institutional mechanisms such as algorithmic registration and synthetic content labeling.
The differences among these approaches go beyond variations in regulatory tools or techniques. They reflect deeper divergences in digital sovereignty, political traditions, and stages of technological development. Generative AI operates within a highly globalized technological ecosystem, where cross-border platform operations, fragmented legal jurisdictions, and asymmetrical enforcement capacities increasingly highlight tensions between digital sovereignty and global technological integration.
At the same time, countries differ in how they prioritize freedom of expression and information security, and the contrast between market-oriented and system-oriented governance models remains difficult to reconcile. More practically, core AI models and computing resources are concentrated in a small number of developed economies. The technological and institutional capacity gap faced by many developing countries limits their influence in shaping global rules. In short, the mismatch between the rapid diffusion of technology and the slower pace of institutional coordination has become a major obstacle to global consensus.
Despite these institutional differences, the risks posed by generative AI are inherently transnational, making misinformation a shared global challenge. In a context where unified rules are difficult to achieve and the costs of regulatory fragmentation continue to rise, establishing minimum coordination standards in areas such as content identification, transparency mechanisms, and risk assessment may offer a pragmatic path forward.
For developing countries, strengthening governance frameworks is as important as advancing digitalization and technological capacity. Generative AI presents both developmental opportunities and governance challenges. Finding a balance between fostering innovation and ensuring security will be one of the defining tasks of future global information governance, and it will also be crucial for rebuilding public trust in the digital age.
Liu Guanchen
Liu Guanchen is affiliated with the Criminal Investigation School, Southwest University of Political Science and Law, Chongqing, China.