Introduction
Canada’s national consultations on digital and data transformation present artificial intelligence (AI) as a revolution underway. According to the Honourable Navdeep Bains, minister of Innovation, Science and Economic Development (ISED, 2018), “Today, AI [artificial intelligence] and big data are transforming all industries and sectors. They are presenting new opportunities for innovators to create jobs and generate prosperity” (par. 8). This narrow framing implies that government must adapt to technological change rather than consider what decisions have encouraged the adoption of AI. Moreover, this framing overlooks the ways the Canadian government is actively shaping policy for AI. This article provides a summary of the Canadian government’s own rapid development of AI governance, curiously absent from the consultation. The findings draw on our active engagement in the process and build on the legacy of Canadian communication policy (Shade, 2008). We hope our experiences interacting with Global Affairs Canada and the Treasury Board help document beginnings of AI governance in Canada that are less well known than the major funding announcements made as part of the Pan-Canadian Artificial Intelligence Strategy.
Artificial intelligence in information and communication technology policy
As a policy issue, AI operates at the intersection of big data and automation. On one side, machine learning—a class of algorithms that improve through experience, and the kind of AI most discussed in Canada—requires massive amounts of training data to optimize its models, raising privacy concerns. On the other, once trained, AI requires proper implementation and should be used only when experts deem its application acceptable.
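As a rough illustration of what “improving through experience” means in practice, the minimal sketch below trains the same classifier on progressively larger slices of a dataset and reports its accuracy on held-out examples, which generally rises as the training set grows. The choice of scikit-learn’s bundled digits dataset and a logistic regression model is an illustrative assumption, not a reference to any system discussed in this article.

```python
# Illustrative sketch only: a classifier's accuracy on unseen data
# tends to improve as it is trained on more examples ("experience").
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A small bundled dataset of handwritten digits (illustrative choice).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on progressively larger samples and measure held-out accuracy.
for n in (50, 200, 800):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train[:n], y_train[:n])
    print(f"{n} training examples -> accuracy {model.score(X_test, y_test):.3f}")
```

The same dynamic is what makes access to large stores of everyday communications so valuable to machine-learning firms, a point taken up below.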
Data and automation are old concerns—matters well known to communication, media, and science and technology studies. These intersections are highlighted to stress the present motivations for studying the governance of AI. Machine learning depends on the stabilization of information and data as well as the process of creating data, or datafication (Gabrys, Pritchard, & Barratt, 2016; Halpern, 2014; Sterne, 2012; van Dijck, 2013). It cannot be overlooked that two of the biggest players in machine learning, Google and Facebook, have unprecedented access to everyday communications, which amounts to a massive source of training data. With regard to automation, the promise of AI in replacing or assisting workers resonates with long-standing questions about the discourses that make certain jobs, especially feminized ones, automatable (Gray, 2015; Light, 1999; Taylor, 2018). Both data and automation require renewed attention to the political economies, myths, and cultural logics that orient technologies (AI Now Institute, 2018b; Hicks, 2017; Mosco, 2009).
FAIR, FACT, or FATE? From standards to enforcement
The Canadian government’s experiments with AI are part of a global rush to codify rules and regulations on AI. Several standards have been proposed to govern AI and its underlying data. Many of these standards come from data science and are concerned with ensuring transparent practices and establishing accountable methods of “securing the role of facts in public debate” (Marres, 2018, p. 424). FAIR refers to Findable, Accessible, Interoperable, and Reusable—guiding principles for data standards developed at the Lorentz Centre in the Netherlands (FORCE11, 2014). Building on FAIR, the Responsible Data Science Initiative has proposed Fairness, Accuracy, Confidentiality, and Transparency (FACT) to address the call to “provide transparency; how to clarify answers in such a way that they become indisputable?” (Kemper & Kolkman, 2018, p. 5).
Regarding the acceptable use of AI, academic and industry discussions have centred on Fairness, Accuracy, Transparency, and Ethics (FATE). In general, these terms question the acceptability of AI—whether its models introduce bias, produce reliable results, and can be understood or explained—and suggest what the ethical framework of the industry should be (Barocas, Hood, & Ziewitz, 2013). Ethics has quickly become a favoured solution of an industry hoping to emphasize the individual choices of developers over the scrutiny of critics calling for more regulation and accountability around the political economy of AI development and other systemic industry factors (Campolo, Sanfilippo, Whittaker, & Crawford, 2017).
Neither the FAIR, FACT, nor FATE initiatives have led to formal governance institutions or regulation. Overall, standards enforcement around AI remains an open question. Some have suggested enforcement might come from industry self-regulation. Many leaders of the AI community in Montréal, particularly at the Université de Montréal and the Institute for Data Valorisation (IVADO), have consulted with citizens and published the Montréal Declaration for the Responsible Development of AI (Declaration of Montréal for a Responsible Development of AI, 2019). The effect of the Montréal Declaration, launched on December 4, 2018, remains to be seen. Not to be outdone, Toronto saw the release of the Toronto Declaration on AI at RightsCon 2018 (Access Now, 2018). Self-regulation might also come from employees themselves, with labour action becoming more prominent at some of the biggest players in AI, especially Google (Shane, Metz, & Wakabayashi, 2018; Stapleton, Gupta, Whittaker, O’Neil-Hart, Parker, Anderson, & Gaber, 2018).
The Canadian government might be another mechanism to enact these standards. Not, however, through regulation. Despite calls for creating new legislation for AI (Chadwick, 2018) or interpreting existing law (McKelvey, 2018a), there is no legislation pending on AI governance. Instead, inside the Canadian government, two departments have embarked on unusual and promising experiments in developing best practices for AI, crafting policy tools meant for the government to establish industry-wide criteria (Copeland, 2018; Lascoumes & Le Gales, 2007).
Algorithmic impact assessments at the Treasury Board of Canada
The Treasury Board of Canada has been active in drafting AI policy for the federal government, a process that concluded with the publication of the 2019 Directive on Automated Decision-Making (Greenwood, 2019). Work leading to the directive began with a “Digital Disruption White Paper” written in the summer of 2017 (Karlin, 2017, par. 8). The Treasury Board led the process in keeping with its function of setting departmental policy across the federal government. The project lead, Michael Karlin (2017), announced the white paper on Twitter and Medium on July 4, 2017. The success of the government’s adoption of responsible AI policies remains to be seen, but if approached correctly, Canada’s government could become a national model for the acceptable use of AI.
The federal government experimented with a highly open consultation process during the development of its AI self-regulation. It invited collaboration on its public GCcollab tool, an external social networking site started as a pilot in 2016. The public, though in practice mostly experts, could join its Artificial Intelligence Policy Workspace, where civil servants shared news and reports (Karlin, 2017). Most public collaboration centred on an open Google document that Karlin published on October 27, 2017. The disruption paper, entitled “Responsible AI in the Government of Canada,” summarized the benefits and risks of AI to the federal government. Through the comment feature of Google Docs, Karlin received feedback from interested members of the public, as well as academics and members of the AI community from across Canada. Karlin also toured a few universities, including Concordia University, to consult with stakeholders. Although the consultation was far from inclusive, the exercise did attempt new ways of engaging the public in the policy development process, part of a new emphasis on digital service delivery in the federal government and a promising turn toward new forms of consultation in public policy more generally (Bingham, Nabatchi, & O’Leary, 2005).
By March 2018, the report had been translated into an algorithmic impact assessment form for departments to use when considering AI (Karlin, 2018). Modelled after environmental impact assessments, the approach had been popularized only a month earlier by the AI Now Institute (2018a), a leader in the field, as a tool for New York City’s task force on “Automated Decision Systems,” and by Nesta in the United Kingdom. The Canadian government’s tool provides a risk assessment based on:
Impact on individuals and entities
Impact on government institutions
These criteria drew on the report’s final section, “Policy, Ethical, and Legal Considerations of AI,” which discussed bias and fairness in data, transparency, and accountability, as well as acceptable use.
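To make the mechanics of such an assessment concrete, the minimal sketch below shows how a weighted questionnaire can translate answers about a proposed system into an impact rating. The questions, weights, and thresholds here are hypothetical illustrations for exposition only; they are not the scoring scheme of the actual government tool.

```python
# A hypothetical sketch of a questionnaire-based algorithmic impact
# assessment. All questions, weights, and thresholds are invented for
# illustration and do not reproduce the government's actual form.

# Each question maps an answer to a risk-score contribution.
QUESTIONS = {
    "affects_legal_rights":      {"yes": 3, "no": 0},   # impact on individuals
    "decision_is_reversible":    {"yes": 0, "no": 2},   # impact on individuals
    "replaces_human_judgement":  {"fully": 3, "partially": 1, "no": 0},
    "uses_personal_data":        {"yes": 2, "no": 0},   # impact on institutions
}

# Hypothetical thresholds mapping a total score to an impact level.
IMPACT_LEVELS = [
    (0, "Level I: little to no impact"),
    (3, "Level II: moderate impact"),
    (6, "Level III: high impact"),
    (9, "Level IV: very high impact"),
]


def assess(answers: dict) -> str:
    """Sum the weighted answers and return the matching impact level."""
    score = sum(QUESTIONS[q][a] for q, a in answers.items())
    level = IMPACT_LEVELS[0][1]
    for threshold, label in IMPACT_LEVELS:
        if score >= threshold:
            level = label
    return level


# Example: a hypothetical system that automates part of a legal decision.
print(assess({
    "affects_legal_rights": "yes",
    "decision_is_reversible": "yes",
    "replaces_human_judgement": "partially",
    "uses_personal_data": "yes",
}))  # -> "Level III: high impact" (score 6)
```

The design choice worth noting is that the rating depends entirely on how questions are framed and weighted, which is why the scrutiny of the criteria themselves, discussed below, matters as much as the form’s existence.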
This tool is only now being used across the federal government as a result of the 2019 Directive on Automated Decision-Making. Will the tool prevent problematic applications? Already, Petra Molnar and Lex Gill (2018) of the Citizen Lab have questioned the potential risks of using it in immigration policy in lieu of more formal, enforced standards. The first adopter of this tool seems to be Justice Canada, looking to use new “legaltech” to predict rulings in tax cases (Beeby, 2018). Applications to date seem low risk, but it also seems clear that though we might know how government considers AI, we are not privy to who engages with it—and whether high-risk situations such as immigration should be considered a no-go zone.
AI and human rights at Global Affairs Canada
As the Treasury Board formulated national AI policy, Global Affairs Canada (GAC) turned to international issues on AI as a pressing matter for its Digital Inclusion Lab to address. Following through on the request to “promot[e] human rights, women’s empowerment and gender equality” (Office of the Prime Minister of Canada, 2017, par. 16) in the mandate letter to the minister of foreign affairs, GAC sought to understand critical perspectives on AI governance, with a particular eye to racial and gender bias and an emphasis on labour and human rights paradigms.
By the fall of 2017, GAC had begun approaching relevant academics and universities to organize a symposium on AI and human rights. Eventually, GAC collaborated with the Canadian Institute for Advanced Research (CIFAR), one of Canada’s leading sponsors of AI research, to run the symposium. The event was one of the first under CIFAR’s AI and Society program and one of the few examples of funding directed at the social impacts of AI in Canada.
Student teams from ten universities used case studies to research areas of potential policy intervention for AI regulation, including combating extremist content online, addressing discrimination and bias, and recognizing climate change and refugee rights. From their findings, the teams delivered policy recommendations in memorandums for action to the minister of foreign affairs, the Office of Human Rights, Freedoms and Inclusion, and the Digital Inclusion Lab. Many presentations rebuked the “disruption” framing that saturates the industry, recognizing that AI exists on a long continuum of technological innovation and calling for reflexive governance grounded in existing policy frameworks. By foregrounding a rights-based approach, these student policy recommendations rejected the industry habit of framing AI as unprecedented and therefore difficult to govern. By addressing AI applications through existing rights-based frameworks, students looked past the hype to recognize the many more mundane but essential ways AI already affects daily life.
The symposium represented an important consideration for governments looking to address the lack of racial and gender diversity in AI development and deployment (Silcoff, 2017). Student groups delivering the recommendations included participants of all genders and with diverse personal and disciplinary backgrounds. To be innovative and representative, the technology sector has to move beyond the old boys’ club and include diverse voices. Affirmative action and equitable employment policies are not keeping pace with the speed of AI innovation, so inclusivity must be prioritized as a governance imperative. Policymakers who influence these markets, and consultations such as the GAC symposium, represent well-intentioned opportunities for intervention that value disciplinary and representative diversity.
The outcomes of this symposium remain unclear at this point. The most immediate result seems to be a collaboration with the Chief Information Officer Strategy Council to develop ethical AI standards. In a fractured world, these national initiatives might be part of international strategies to set AI standards, especially when delivered under Canada’s tenure as leader of the G7 and alongside work on the Canada-France Statement on Artificial Intelligence signed in June 2018 (Global Affairs Canada, 2018).
Conclusion
This article summarizes important developments around AI governance in the public service and raises questions about the intent of the ISED consultation. The consultation seems a curious artefact of the Liberal government’s present approach to technology policy: while openly asking for public opinion, the government appears to be rapidly formulating policy through ad hoc consultation mechanisms. Such an approach undermines trust in consultation as a way to inform decision-making.
The attempts at inclusion in these experiments do point to a way forward. Standards around AI can and should be approached from a critical perspective that considers development, deployment, and impact from a wide diversity of voices beyond the tech sector. Moreover, future consultations might problematize the matter of inclusion altogether. In their work on hybrid forums, Michel Callon, Pierre Lascoumes, and Yannick Barthe (2009) ask how humans, nonhumans, and uncertainties might work toward democratic decisions. What would a hybrid forum for AI look like? Feminist science studies (Hayasaki, 2017), daemonic media studies (McKelvey, 2018b), and, most importantly, Indigenous epistemology (Lewis, Arista, Kite, & Pechawis, 2018) provide clues. These approaches, often on the outside of AI governance in Canada, point toward a much more radical project for inclusive consultation.