When Covid-19 swept across the world in 2020, the boundaries of Global North, Global South, First World, Third World, rich, poor seemed to blur. The virus moved without a passport. It filled lungs in New York City and Jakarta with the same indifference.
For a brief, almost hopeful moment, dividing humanity into political spheres felt beside the point. Everyone felt the pain of losing public places, loved ones, and breath itself. We were, the story went, all in this together.
We were not. When vaccines arrived, the architecture of global inequality reassembled itself with quiet efficiency. The virus made everyone equal at the level of feeling, but that equality never translated into a change in the global structures of power.
Artificial intelligence is the next test. And we’re already failing in familiar ways. The conversation around AI is saturated with the language of universal benefit: it will accelerate development, close gaps, empower everyone. However, in practice, the gaps are widening.
Those who control AI infrastructure and set its frameworks — a handful of American big tech companies — continue to extract value from communities that have no seat at the AI table. The Global South participates in AI development as a resource, not as a stakeholder.
Conversations about AI tend to fixate on the visible layer: the chatbots, the models, the tools and use cases. This is a distraction. The more consequential question is not what models are the most capable, but who owns the ground they run on.
AI infrastructure is the stack beneath the software. It is the data centers consuming the electricity of small nations. It is the cloud platforms — Amazon Web Services, Microsoft Azure, Google Cloud — that together control roughly two-thirds of the global cloud market.
It is the semiconductors, primarily NVIDIA’s GPUs, without which large-scale AI training is simply not possible, and which the United States has placed under sweeping export controls since 2022. It is the architecture of who can access compute, at what price, on whose terms.
This infrastructure is concentrated geographically, corporately, and politically in the US. Countries without domestic cloud infrastructure, without semiconductor supply chains, without the capital to build either, do not simply lag in AI development. They participate in AI on terms set entirely by others, under governance frameworks they cannot influence.
Infrastructure determines access. Governance determines whose interests the infrastructure serves. And on that front, the exclusion of the Global South is structural.
The decisions being made about data rights, liability, and what counts as harmful AI will shape the technology for decades. The Global South has learned this lesson before, in trade negotiations, in climate finance, in intellectual property regimes. It is watching the same pattern repeat.
Meanwhile, Global South populations are among AI’s primary raw material suppliers. The text, images, and behavioral data generated by billions of people across Africa, South Asia, and Latin America have fed the training pipelines of the world’s most powerful AI systems, without consent, compensation, or representation in decisions about what those systems are built to do. It is contribution without credit.
At the bottom of the stack, the human feedback and data labelling that make AI systems deployable is overwhelmingly performed by workers in Kenya, the Philippines, and India — cheap labor, exposed to psychologically damaging content, with no meaningful support. These are the people building the guardrails of AI, and they are its least protected participants.
In this environment, Europe’s digital sovereignty drive may seem like a model: regulate, extract compliance, create your own frameworks. However, Europe isn’t exactly nailing this. GDPR forced US companies to take data protection seriously, at least on paper. American tech giants built data centers in Frankfurt and Dublin, and paid fines when they fell short.
On the surface, this looks like sovereignty in action. It is not. Compliance is not independence. The expertise, the model weights, the underlying architecture — none of that moved to Europe. The reins stayed in American hands. What Europe achieved was a licensing arrangement dressed in the language of sovereignty.
More fundamentally, Europe’s leverage rested on conditions the Global South cannot replicate: a GDP large enough to make non-compliance commercially unthinkable, and a transatlantic political relationship in which US companies could be persuaded to perform European values.
The challenge is to institutionalize AI in a way that this unprecedented technology benefits all.
That last condition is now visibly eroding. The EU’s model is premised on a cooperative, values-compatible United States. It’s a bet that a friendlier America will eventually return after Trump. This is hope, not strategy.
I believe the place to start is to understand and accept the underlying structural reality of how AI is being developed and for whom.
How the technology is being implemented is not a temporary arrangement pending a more equitable settlement; it will shape the international power structure for decades.
Accepting that reality is the precondition for any response worth taking seriously. International institutions — the UN, the ITU, the multilateral bodies that exist precisely to represent the less powerful — must treat AI governance as the sovereignty question it already is.
The window for structural intervention is open now, while the architecture is still being built. It will not stay open indefinitely.
Hassan Ahmed is an IR graduate with research and writing experience in IR theory and great power politics. He previously worked as a research fellow at IPDS and wrote for The Diplomatic Insight. His published works include "Reimagining US-Pakistan Ties" and a Book Review of "War Without Winners."