We urgently need to debate the use of facial recognition cameras before they become widespread, argues Rosamunde van Brakel.

This opinion piece was published in De Standaard.

According to new guidelines from the European Commission on prohibited AI applications, facial recognition cameras are coming. Real-time biometric identification, such as facial recognition, will be permitted when law enforcement is searching for a terrorist—after an attack on a Christmas market or a music festival, for instance—or for someone with an outstanding arrest warrant for drug trafficking. This does not mean that these cameras will be running at all times, but once national legislation is in place, the infrastructure can be widely installed and used under specific conditions.

Policymakers need to think twice. Scientists have shown that real-time facial recognition is far from reliable. The technology produces many false positives, incorrectly identifying people due to flaws in the AI algorithms, which are often trained on datasets that do not accurately represent the population. Research has demonstrated that these errors disproportionately affect women and people with darker skin tones, leading to discriminatory outcomes.

Rosamunde van Brakel

"We have yet to have a serious debate in Belgium about whether and how we want to use facial recognition for security. It’s long overdue."

Moreover, studies confirm that even when a misidentification is obvious, police officers often proceed with interventions regardless. In the United States, this has already led to wrongful arrests, with innocent people ending up in jail. In high-stakes cases, such as terrorist investigations, the consequences can be devastating. Take the case of Jean Charles de Menezes, a Brazilian man mistakenly identified as a terrorist and fatally shot by police in London.

There is also the risk of function creep—the gradual expansion of a system beyond its original purpose. Who can guarantee that these cameras won’t be used for unintended purposes? In Belgium, surveillance technologies have already been repurposed without clear oversight. Large-scale surveillance infrastructure could, in the wrong hands, be exploited by politicians with little regard for human rights or democracy. Additionally, such systems are vulnerable to hacking, potentially putting them in the hands of foreign regimes.

The European Union insists that sufficient safeguards will be in place to protect fundamental human rights. But how will governments assess this? Will they go beyond ticking boxes on an abstract rights checklist? Will they consider automation bias and the potential for future misuse?

Belgium has yet to engage in a thorough debate on facial recognition and other AI-based security measures. It’s time we do.*

*This is a machine translation. We apologise for any inaccuracies.