‘Digitize Me!’ is the name of the brand-new festival hosted by Antwerp’s arts centre De Singel (from 22 to 27 April). The big question? What will all those ‘digitised selves’ mean for democracy? Will artificial intelligence push us even deeper into our social bubbles? Or could AI chatbots bring us closer together? On the opening day of the festival, VUB alumna and AI pioneer Pattie Maes – now at MIT in the US – will debate the issue with author and VUB researcher Paola Verhaert. Here's a double interview to set the stage.

Artificial intelligence has already become a fixture in our daily lives and work. And according to Pattie Maes, this is only the beginning. She began her academic journey at VUB and has been conducting groundbreaking AI research at the world-renowned Massachusetts Institute of Technology (MIT) since 1989.

Pattie Maes: “AI is becoming the interface between us, other people, and the world of information. The shift from mass media to social media has already led to more polarisation and social fragmentation. More and more people end up in echo chambers. AI will only amplify that. Chatbots will tell you what you want to hear – not what you need to hear when you're after the truth. Search engines like Google are being replaced by AI systems that spit out pre-packaged answers instead of letting you do the digging yourself.”

Is that dangerous?

Pattie Maes: “Yes, it is. AI systems still make a lot of mistakes. So the information they give can be wrong. On top of that, AI is biased. When you use a search engine, you read the articles and form your own summary. AI does that for you – but with a slant. That bias can creep in unconsciously or be baked in deliberately.”
 

“We need to get better at asking critical questions. What role does AI play in my life, my studies, my job? Do I have something to lose here? And is that a risk I’m willing to take?”
Pattie Maes

But AI is often presented as neutral.

Pattie: “Exactly – and that’s the problem. Studies show that people don’t realise this. They’re unaware of how much AI influences them. A colleague of mine demonstrated this in a study: if people use a biased AI model every day – for instance, one that downplays climate change – they gradually adopt that bias. They don’t even notice their views are shifting. We’re still digitally illiterate when it comes to AI. There’s a lot of work to do.”

Paola Verhaert: “Especially around critical literacy. Knowing how to use AI is one thing, but you also need to ask yourself tough questions. What is AI’s role in my life, my studies, my work? Could I be harmed by it? And is that something I want to accept?”

Does AI pose a threat to democracy?

Paola: “It’s a huge issue. Almost all the popular AI applications are developed by companies. They’re not building them to benefit society – they’re doing it to make money. And that can clash with democratic values. Meanwhile, the infrastructure they’ve built has become essential to our daily lives. So of course citizens should have a say – along with government – in how that infrastructure is run. Think of it like the electricity grid or the railway system. We’ve built checks and balances there. Citizens and governments decide together what’s acceptable and what’s not.”
 

“You read the paper – but the paper doesn’t read you. Social media and AI do. And they serve you more and more of the same, only more extreme”

Is this just a European take, or are people in the US also worried?

Pattie: “Absolutely. Especially now, with Trump back in power, the fear is real. Until recently, AI companies only cared about making money. But now governments are stepping in – because they know this tech is extremely powerful.”

But isn’t that how it’s always been? Newspapers make money too, and they can also be used to manipulate.

Pattie: “Yes, but with AI it’s more subtle. AI is expensive. These companies need a constant flow of money to train new models. The Trump administration is now pouring billions into OpenAI. No one will be surprised if they expect something in return.

“There’s another key difference: you read the newspaper – but it doesn’t read you. Social media and AI do. And then they feed you more of the same, but slightly more extreme each time. Because that’s what keeps you scrolling. That’s what generates ad revenue. The better they know you, the more effective that becomes. That’s why Sam Altman recently announced that ChatGPT will start keeping track of everything you do in your sessions.”

To influence you politically?

Pattie: “Could be. But also to sell you stuff. And they’ll do it in subtle ways. If you ask a chatbot how to prepare for a job interview, it might suggest certain clothes – and casually drop a specific brand. And surprise, surprise: that brand paid to be mentioned. But you won’t even realise it’s an ad.”
 

“Global tech policy seems stuck in a split: Big Tech in the US versus Big State in China. But with a bit of political imagination, we could create far better options”

When it comes to artificial intelligence, Europe just stands by and watches.

Pattie: “It’s not too late to create Belgian or European AI models. Mistral AI in France is a great example – they’re making real progress.”

The EU is trying to make AI safe and ethical through regulation. But critics – especially in the US, and sometimes in Europe too – claim the approach is too protectionist and stifles innovation.

Pattie: “In the US, the go-to argument is always: we can’t regulate AI, or we’ll lose the race to China. But plenty of Americans don’t buy that anymore.”

Paola: “It’s a myth that regulation kills innovation. In fact, the opposite is true. Research shows that GDPR rules actually gave a boost to technical innovation. And not just any innovation – the kind that supports democratic values, like transparency and respect for human rights. Innovation isn’t just about making things faster or more efficient. Right now, global tech policy seems to be split between two extremes: Big Tech in the US, and Big State in China. But with a little political imagination, we could come up with far better alternatives. I really hope democracies in Europe, Asia and Latin America will start investing in that kind of innovation policy.”

Paola Verhaert

Is Europe the right level to tackle this?

Paola: “National politicians are a bit too quick to push the AI debate off to Europe, if you ask me. But the EU was primarily set up as an economic integration project. It still has limited powers when it comes to social policy or education. That means every member state has to seriously consider for itself how AI will impact its own society.”

Pattie: “AI could have a huge effect on education in particular. We need to think about how we protect children and young people. At that age, they form habits they’ll carry with them for life.”

It’s starting to feel like one big social experiment. And no one really knows where it’s going.

Pattie: “Not even the AI companies themselves. They don’t actually know how their own systems come up with answers.”

AI can also be used for good. In one study, conspiracy theorists who chatted with an AI bot came away 20% less convinced of their views.

Pattie: “That’s from research done by colleagues at MIT. But in that case, the chatbot had been trained on specific conspiracy theories – about COVID-19 or 9/11, for instance. The catch is: you could use the exact same technology to convince people of conspiracy theories. It all depends on how you train the bot.”

AI is already being used to support democracy on a larger scale. In Taiwan, citizens debate education budgets, climate policy, and social benefits online. An AI tool then analyses the discussions and identifies common ground.

Paola: “I’m quite critical of that kind of approach. If you just throw everyone’s opinions into a system and wait to see what middle ground pops out, you’re missing the point of democratic debate.”

So what is the point of democratic debate?

Paola: “It’s about going through a process together – listening to each other patiently and having a proper dialogue. It’s okay if you don’t fully agree at the end. People have different views and ways of seeing the world. That doesn’t mean you can’t engage respectfully and reach some kind of agreement. I’m a big fan of the Belgian philosopher Chantal Mouffe and her theory of agonism. She says conflict shouldn’t scare us – we should embrace it, as long as we respect each other’s rights and dignity. That’s why I’m sceptical of AI systems that just push us towards consensus. That’s not enough.”
 

"What we actually need are AI systems that challenge our views and show us different ways of looking at the world. That’s how we grow as people"
 


Pattie: “Chatbots reflect our own thoughts back to us – they act like an echo. But we grow by facing disagreement and conflict. You only learn something new when you see how someone else experiences the world. What we actually need are AI systems that challenge our views and make us more aware of other life perspectives. That’s how we grow and develop as human beings.”

Paola: “That would be beautiful.”

Speaking of conflict – how do you see the clash between Trump and American universities?

Pattie: “I was really glad that Harvard refused to give in (this interview took place on 15 April 2025, a day after Harvard University rejected the Trump administration’s demands, ed.). They’re risking $2.2 billion in funding – possibly even more in the long run. Lay-offs are probably coming. But I think it’s brave. Hopefully other universities will follow their lead. Many are already drawing up plans to take legal action together.”

Paola: “What Harvard is doing is indeed hopeful. Universities are wealthy and powerful institutions. If you have that kind of power, you should use it for good. Especially when it comes to protecting your students – students who are being arrested, detained, and possibly deported just for standing up for Palestinians. Philosopher Sofie Avery recently wrote an opinion piece about a case of misconduct by a professor at a Flemish university. She called for ‘institutional courage’ – the idea that a university must show clearly what it stands for, and how it puts its values into practice. That courage is needed not just in isolated cases, but across the board. Including when students, researchers, or professors are being targeted for their political beliefs. That’s what I’m hoping for: courage.”

Bio Pattie Maes

Pattie Maes studied computer science and earned her PhD at VUB in 1987. Two years later, she moved to the United States, where she has been researching and teaching at the prestigious Massachusetts Institute of Technology (MIT) ever since. Her work focuses on human-computer interaction and artificial intelligence. She is internationally recognised as a pioneer in her field.

Bio Paola Verhaert

Paola Verhaert studied contemporary history and European policy at VUB, and digital humanities at KU Leuven. She is a writer and researcher at imec-SMIT (VUB), working at the intersection of social justice and technology. In 2024, she was named a ‘Scherpsteller’ by deBuren and the Hannah Arendt Institute – a title awarded every two years to an inspiring emerging thinker.
