The Rise of AI Surveillance
by Melissa Heikkilä | Politico EU
Welcome to the dark side of artificial intelligence.
As a technology, AI has been touted as a silver bullet for many of society’s ills. It has the potential to help doctors spot cancers, assist in the design of new vaccines, help predict the weather, give football teams insights into their strategies and automate tedious tasks like driving or administrative work.
And of course, it can be used for surveillance.
There’s a case to be made that there’s no such thing as AI without surveillance. AI applications rely on mountains of data to train algorithms to recognize patterns and make decisions, and much of that data is harvested from consumers without their realizing it. Internet companies track our clicks to divine our preferences for products, news articles or ads. The facial recognition company Clearview AI scrapes images from sites like Facebook and YouTube to train its model. Facebook recently announced it will begin training AI models on public videos users have uploaded to the platform.
Increasingly, however, algorithms aren’t just being powered by surveillance. They’re being deployed in its service.
The coronavirus pandemic — and the need for rapid data on public health — has opened the door to data collection and tracking on a scale that would have been nearly unimaginable little more than a year ago.
In the European Union, governments have used mobile phone data to track people’s movements. Companies have set up AI-equipped cameras to check whether workers and customers are complying with social distancing rules. France has rolled out AI-powered camera software on public transport to monitor mask wearing.
“Normalizing biometric surveillance is pretty apparent with COVID-19,” said Fabio Chiusi, a project manager at AlgorithmWatch, an advocacy group monitoring automated decision-making.
The use of AI for surveillance predates the coronavirus, of course. Almost every European country has some version of facial recognition technology in use. Dutch police use it to match photos of suspects against a criminal database. The London Metropolitan Police uses live facial recognition to scan crowds for faces on a watchlist. The French government is a fan of using AI to track “suspicious behavior.”
The rapid adoption of surveillance technologies has raised questions about how they fit into a society’s values. Facial recognition in particular highlights the trade-off between privacy and legitimate needs to identify and track people. Proponents of the technology say it is a powerful tool that can help immigration officials screen travelers at borders, or help the police catch criminals.
Similar algorithms are also being developed for so-called biometric recognition: identifying people by how they look, sound or walk. There is a “growing impulse within the biometrics field to identify people’s emotions, and other states based on external appearance,” said Ella Jakubowska of the digital rights group EDRi, who has campaigned to ban the technology. She added that this use of the technology is “seriously not based in any credible science.”
In the United States, concerns over the technology have prompted some states and cities to draw harder lines. The city of Portland, Oregon, has adopted a blanket ban on the use of facial-recognition technology by city departments as well as by stores, restaurants and hotels. New York City has banned facial recognition in its schools, and activists are calling for the prohibition to be extended to the city’s streets.
Activists have also raised the alarm over the harm these systems can cause to marginalized groups. “It’s pretty clear in the patterns we see across Europe, the use of these systems are entrenching stereotypes and discriminatory ideas about who is more likely to be criminal, and who can’t be trusted, in a way that’s really, really dangerous,” said Jakubowska.