Last week the Council of Europe held an exceptionally strong conference in Helsinki: Governing the Game Changer – Impacts of artificial intelligence development on human rights, democracy and the rule of law. From start to finish, the Council put together a set of great panels with expert presenters from across Europe. You can find video of the event here.
I participated in the scene-setting exercise that launched the substantive discussions, following Joanna Bryson, a professor at the University of Bath in the UK, and Dunja Mijatovic, the Council of Europe’s Commissioner for Human Rights and former OSCE Representative on Freedom of the Media. My prepared remarks follow here (as usual, check against delivery):
“Thank you for the invitation to participate in this remarkable and deeply substantive conference. It is a particular honor to follow my dear friend Dunja Mijatovic, one of our great protagonists in the fight to ensure that human rights play a leading role in shaping our digital information environment.
The role of the Council of Europe cannot be overstated, and I want first and foremost to encourage the Council to continue its critical work in this space. No other intergovernmental institution is doing work of this kind at this scale. You are a normative and policy leader here.
I am here, along with Joanna Bryson and Dunja, to provide some “scene setting” for AI – in my case, AI and the information environment. I think it is appropriate for us not to imagine AI solely as a nefarious great disrupter, transitioning us somehow from a period of individual autonomy and agency to one in which we are subject to the whims of governments on one side and, as one European official once described them to me, the “profit-making beasts” of Big Tech on the other. AI, like all technologies, may be a tool for good or a tool for ill. It could be a tool that bolsters and reaffirms human agency, that expands our ability to exercise our rights to freedom of opinion and freedom of expression. It could be a tool, or set of tools, that protects us from the discrimination, profiling, and censorship that the digital age, even where it does not actually impose them, hangs like a sword of Damocles over our heads.
The ground truth about AI is something Joanna has already made clear – it is a human creation. As such, it must be bound by the rules of human rights and accountability.
I want to make three general scene-setting points: first, about the role of AI in the information environment; second, about the role of human rights obligations in framing and shaping AI; and third, about the role of smart regulation moving forward.
First, AI itself. Of course, today, algorithms and AI applications are found in every corner of the internet, on digital devices and in technical systems, in search engines, social media platforms, messaging applications, and public information mechanisms. They implicate content display and personalization, content moderation and removal, and profiling, advertising and targeting.
The enormous volume of data in contemporary life, and the capacity to analyze it, fuel AI. That very vastness has allowed a cloak of opacity to be drawn around the tools used to sort the data – and in the information environment, that is particularly problematic. It is in part for this reason that AI technologies – whether adopted by public or private actors – must be as transparent as possible, so that, as they take on greater roles in our information environment, we do not lose control of fundamental rights to the machines and the humans who program them.
So, second, let me turn to three of the rights at issue in the information environment. One that gets short shrift in the jurisprudence, but not in this room, is the freedom of opinion. Let’s be honest: content curation has long informed the individual’s capacity to form opinions. Media outlets elevate particular stories to the front page with the intention of shaping and influencing individual knowledge about significant news of the day. Commercial advertising has likewise sought to induce favorable opinions of, and cultivate desire for, particular products and services.
But the intersection of AI and content curation raises novel questions about the types of coercion, manipulation or inducement that may be considered an interference with the right to form an opinion. Are we losing some autonomous capacity to form and maintain opinions as unaccountable companies or state-driven enterprises choose, quietly and opaquely, what we see, read, and ultimately know? And how do we guard against that?
A second right, of course, is the freedom of expression – the right to seek, receive and impart information and ideas of all kinds, regardless of frontiers. AI, as we see in computational propaganda, can be a tool to limit free expression. A lack of clarity about the extent and scope of AI and algorithmic applications online prevents individuals from understanding when, and according to what metric, information is disseminated, restricted or targeted. Small concessions to this problem – the selective labeling of sponsored search results, or social media platforms flagging advertising paid for by political actors – may help users understand the rules of the information environment at the margins, but they neither capture nor resolve the concern about the scale at which algorithmic processes are shaping that environment.
Hovering over AI, moreover, is the pervasive human right to be free from discrimination. Here is a fundamental problem: AI is a technology of bias. Its fuel is discrimination. More than anything else, it is a tool to discriminate among possible choices and outcomes. Whether in the hands of private or public actors, AI threatens to replicate bias and discrimination, a feature with very troubling implications for the fight against hate, misogyny, racism, and other forms of repression within the information environment.
A human rights framework – not only an ethical or self-regulating one but one rooted in law – is essential to our approach to these threats. At the heart of that framework can be the foundational rules of human rights law, those found in the International Covenant on Civil and Political Rights and the European Convention on Human Rights, together with the UN Guiding Principles on Business and Human Rights.
Third, a few words about regulation. Shortly after the election of Donald Trump in the United States, in a moment of panic, the New York Times called for some kind of software to ‘zap bogus stories’. They were thinking of AI, whether they said it or not. And AI – algorithmic filters of what is false, or troubling, or dangerous – seems like a strong weapon. It is. And its use could lead us down troubling paths.
Here we have to talk about Brussels. I am particularly concerned that here in Europe regulation is already moving in problematic directions. We ought to be honest about one of the core elements of the big European debates over content regulation: the debates over terrorist content and hate speech online are in part about the push for algorithmic tools – AI – to take down “illicit” content, whatever that may be, at the point of upload. The companies themselves fuel confidence in these tools.
This is also the thinking behind Article 13 of the draft Copyright Directive, a dangerous and unfortunate approach to the genuine problem of copyright protection and a major threat to a free and open internet. The push for upload filters, whether express or implied, is intense.
I want to be clear: upload filters and pressure for instant removal provide a template for the use of AI in deeply problematic ways. This is not only about terrorist content, or hate speech, or disinformation, or copyright. It is about the way we deploy these technologies and how they may interfere with fundamental human rights and shape the information environment.
Finally, I want to conclude with a brief and non-exhaustive set of proposed elements of a human rights-based approach to AI:
- Autonomy of the individual must be our guiding principle, underpinning public and private regulation of AI.
- AI must not invisibly supplant, manipulate or interfere with an individual’s ability to form and hold opinions or to access and express ideas in the information environment. Respecting individual autonomy means, at the very least, ensuring that users have knowledge, choice and control. Pervasive and hidden AI applications that obscure the processes of content display, personalization, moderation, profiling and targeting undermine the individual’s ability to exercise the rights to freedom of opinion, expression and privacy.
- Companies implementing the UN Guiding Principles on Business and Human Rights should orient their standards, rules and system design around universal human rights principles.
- But company self-regulation alone will not be tolerable over the long term.
- The development of codes of ethics and accompanying institutional structures may be an important complement to, but not a substitute for, legally binding commitments to human rights. State actors must encourage this framework while avoiding heavy-handed regulation that leads to censorship and discrimination.
- Embracing radical transparency throughout the AI lifecycle requires companies and governments to take steps to permit systems to be scrutinized and challenged from conception to implementation.
- State-imposed disclosure requirements may be an appropriate means to protect notice and consent.
- Remedy: Adverse impacts of AI systems on human rights must be remediable – and remedied – by the companies responsible. The precondition for establishing effective remedy processes is ensuring that individuals know they have been subject to an algorithmic decision (including one suggested by an AI system and approved by a human reviewer) and are equipped with information about the logic behind that decision.
- But accountability also means a role for public institutions. We have so far failed to create a template for courts and other independent institutions to identify and remedy rights violations. That has to change.
This is a powerful group of people who can help shape the debate over AI and human rights. I am glad to be a part of it and thank you very much for your work – and your attention.
Thank you.”