Mass surveillance and manipulation by powerful AI algorithms represent a much more imminent and tangible threat to our democratic values than killer robots.
The scientific breakthroughs of pioneer artificial intelligence researchers in Toronto, Montreal and Edmonton are fuelling record public and private investments seeking to turn Canada into a global AI powerhouse.
Federal and provincial governments have awarded more than $400 million to R&D initiatives in this field over the past two years alone, while companies such as Microsoft, Google and Facebook are establishing their own dedicated labs in Canada to make sure they’re not left behind in the AI arms race.
This technology will disrupt every aspect of our economy, and the hope is that Canada will reap the rewards of its scientific foresight by developing a thriving innovation ecosystem. But the disruption will also be social and political.
Beyond the waves of job destruction that AI will precipitate in many industries, the main concern regarding the darker side of this technology has focused on the development of killer robots. Elon Musk and hundreds of high-profile AI researchers have voiced their alarm and called politicians to action.
The fear of AI-enabled armed robots resonates particularly well with Western audiences, whom the Hollywood film industry has fed a rich cultural diet of malicious machines threatening to exterminate humanity.
But instead of worrying about what AI and the robots it controls will do to us, we should be more concerned about what it will know about us and what it will make us do. In other words, mass surveillance and manipulation by powerful AIs represent a much more imminent and tangible threat to our democratic values than killer robots.
The disruptive potential of AI has a strategic dimension that has not escaped authoritarian regimes. Vladimir Putin framed it with his legendary sense of nuance when he said “whoever will become the leader in this field will become the ruler of the world.”
With much less fanfare, the Chinese government has come to the same conclusion. It has set aside US$150 billion for AI in its most recent five-year plan, aiming to become the world leader in this field by 2030, with massive additional investments by local governments and private companies.
Not all this money will be used to bolster e-commerce through more effective purchase recommendations and personable chatbots. The University of Toronto’s Citizen Lab has, for example, examined how algorithms embedded in popular Chinese social media apps perform censorship and surveillance functions.
Indeed, one of the main security applications of AI in authoritarian regimes involves the mass surveillance of populations that threaten the stability of the political system and its institutions. China seems the most advanced in this respect with a plan to develop a social credit scoring system to be rolled out by 2020.
This social control tool is already being tested by several municipalities and internet companies. It will assess people based on a pool of online, administrative and banking records. Powerful AI algorithms will be used to assign them a unique trustworthiness rating that will influence what kind of government services (housing, education, health, employment, etc.) and commercial services (bank loans, insurance premiums, travel abroad, etc.) they will be able to access.
The AIs that parse this ocean of data (opting out, by the way, is not an option) will become all-seeing gods, extracting compliance through their capacity to classify behaviours and single out people who diverge from the politically acceptable norm.
Western democracies are not immune to this disturbing trend. In the U.S., Immigration and Customs Enforcement (ICE) recently asked technology firms to develop algorithms that could assess the risks posed by visa holders through continuous analysis of their social media activities during their stay in the country.
Canada is playing an instrumental role in bringing the power of AI to every corner of human life. Instead of limiting its leadership to the research and innovation fields, it should also extend it to the regulatory and diplomatic arenas, to ensure that AI applications are not used for anti-democratic purposes but serve the public good instead.
That would mean preventing Canadian AI technology from being exported to authoritarian states, but also thinking about how Canadian citizens can be protected from surveillance by companies that operate from undemocratic states and therefore share data with their own governments.
On the international stage, Canada should play a more active role in shaping international conventions that would restrain the weaponization of AI and encourage applications that enhance human well-being. Our country’s moral imperative is to guarantee that AI technologies will not erode the privacy ideals and principles that define our democracy.
Benoît Dupont is a professor of criminology and Canada Research Chair in Cybersecurity at Université de Montréal.
“Voices of the College” is a series of written interventions from Members of the College of New Scholars. The articles provide timely looks at matters of importance to Canadians, expressed by the emerging generation of Canada’s academic leadership. Opinions presented are those of the author(s), and do not necessarily reflect the views of the College of New Scholars nor the Royal Society of Canada.