
At the AI Safety Summit, no one was actually focused on the dangers of AI. It was all PR and power struggles. Vincent Ginis was reminded of the film Don’t Look Up.
The original opinion piece (in Dutch) can be found in De Standaard.
When Don’t Look Up was released, I thought it was a powerful film. My only regret was knowing how often the metaphor would be overused in opinion pieces. And yet, here I am, doing exactly that. In the film, the world ignores an impending catastrophe because economic and political interests take precedence.
This week, the world had its Don’t Look Up moment with artificial intelligence. In the same week that Sam Altman, CEO of OpenAI, reportedly told President Donald Trump that Artificial General Intelligence (AGI) could arrive within his term, and Dario Amodei, CEO of Anthropic, predicted superintelligence by 2027 at the latest, the AI Safety Summit took place in Paris. Except it wasn’t really about AI safety. The conference followed the film’s script instead: lots of PR, plenty of power struggles, and zero concrete action or consensus.
AI safety was once about risks to humanity—the threat of millions losing their jobs, the dangers of disinformation and large-scale manipulation undermining democracy, the possibility that AI could one day make decisions beyond our understanding or control. But in Paris, the focus was power. Who gets AI? Who controls it? Who stays ahead? The conversation shifted from risks to geopolitics. And that is dangerous.
"Humanity is building a technology that surpasses its own knowledge—and is utterly unprepared for it"
These risks demand cooperation. How do we prevent certain AI systems from being developed? How do we manage the impact when AI makes entire industries redundant? These are not problems that can be solved at a national level. But as AI becomes a strategic asset, the willingness to cooperate is vanishing. Geopolitical risks, by definition, are about competition, not cooperation.
It doesn’t help that even scientists can’t agree. Some warn about AI being exploited by criminals and authoritarian regimes. Others believe the real problems will begin once AI systems can act autonomously as agents. Another group fears AI simply becoming too intelligent. And then there are the sceptics, who underestimate the capabilities of current models. Each camp was well represented in Paris. That lack of consensus makes it easy for policymakers to pick the narrative that suits them best.
At the UK's AI Safety Summit in 2023, the discussion was still about fundamental risks. In Paris, it was about state control. AI companies are now treated as national assets. That means less transparency, weaker regulation, and more backroom deals. Instead of building technology that is safe for humanity, we are building technology that is, in the short term, safe for the states that own it.
What now? Look up. Humanity is developing a technology that surpasses its own knowledge—and is utterly unprepared for it. The first step is simple: name what is happening. The threat is real, the acceleration is dangerous, and the priorities are wrong. The road ahead is long, but we have to start somewhere.*
*This is a machine translation. We apologise for any inaccuracies.