Over the last five years, Canada's federal government has announced a litany of much-needed plans to regulate big tech, on issues ranging from social media harms, Canadian culture and online news to the right-to-repair of software-connected devices and artificial intelligence (AI).

As digital governance scholars who've just published a book on the transformative social effects of data and digital technologies, we welcome the federal government's focus on these issues.

Difficult conversations

By engaging with the public and experts in an open setting, governments can "kick the tires" on various ideas and build a social consensus on these policies, with the aim of producing sound, politically stable outcomes. When done well, a good public consultation can take the mystery out of policy.

For all its plans, the Liberal government's public-consultation record on digital policy has been abysmal. Its superficial engagements with the public and experts alike have undermined essential parts of the policymaking process, while also neglecting its responsibility to raise public awareness and educate Canadians on complex, often controversial, technical issues.

Messing up generative AI consultations

The most recent case of a less-than-optimal consultation has to do with Innovation, Science and Economic Development Canada's (ISED) attempts to stake out a regulatory position on generative AI.

The government apparently began consultations about generative AI in early August, but news about them didn't become public until Aug. 11. The government later confirmed on Aug. 14 that ISED "is conducting a brief consultation on generative AI with AI experts, including from academia, industry, and civil society on a voluntary code of practice intended for Canadian AI companies."

The consultations are slated to close on Sept. 14.

Holding a brief, unpublicized consultation in the depths of summer is almost guaranteed not to engage anyone outside of well-funded industry groups. Invitation-only consultations can result in biased policymaking that runs the risk of not engaging with all Canadian interests.

Defining the issue

The lack of effective consultation is especially egregious given the novelty and controversy surrounding generative AI, the technology that burst into public consciousness last year with the unveiling of OpenAI's ChatGPT chatbot.

Limited stakeholder consultations are not appropriate when there exists, as is the case with generative AI, a dramatic lack of consensus regarding its potential benefits and harms.

A loud contingent of engineers claims to have created a new form of intelligence, rather than a powerful, pattern-matching autocomplete machine.

Meanwhile, more grounded critics argue that generative AI has the potential to disrupt entire sectors, from education and the creative arts to software coding.

This consultation is happening in the context of an AI-focused, bubble-like investment craze, even as a growing number of experts question the technology's long-term reliability. These experts point to generative AI's penchant for generating errors (or "hallucinations") and its negative environmental impact.

Generative AI is poorly understood by policymakers, the public and experts themselves. Invitation-only consultations are not the way to set government policy in such an area.

CTV looks at the launch of OpenAI's ChatGPT app.

Poor track record

Unfortunately, the federal government has developed bad public-consultation habits on digital-policy issues. The government's 2018 "national consultations on digital and data transformation" were unduly limited to the economic effects of data collection, not its broader social consequences, and problematically excluded governmental use of data.

The generative AI consultation followed the federal government's broader efforts to regulate AI in Bill C-27, the Digital Charter Implementation Act, a bill that academics have sharply critiqued for lacking effective consultation.

Even worse has been the federal government's nominal consultations toward an online harms bill. On July 29, 2021 — again, in the depths of summer — the federal government released a discussion guide that presented Canadians with a legislative agenda, rather than surveying them about the issue and highlighting potential options.

At the time, we argued that the consultations narrowly conceptualized both the problem of online harms caused by social media companies and the potential remedies.

Neither the proposal nor the faux consultations satisfied anyone, and the federal government withdrew its paper. However, the federal government's response showed that it had failed to learn its lesson. Instead of engaging in public consultations, it held a series of "roundtables" with — again — a number of hand-picked representatives of Canadian society.

Fixing mistakes

In 2018, we outlined practical steps the Canadian government could draw from Brazil's very successful digital-consultation process and subsequent implementation of its 2014 Internet Bill of Rights.

First, as Brazil did, the federal government must properly define, or frame, the issue. This is no straightforward task when it pertains to new, rapidly evolving technologies like generative AI and large language models. But it's a crucial step in setting the terms of the debate and educating Canadians.

It's imperative that we understand how AI operates, where and how it obtains its data, its accuracy and reliability, and, importantly, its possible benefits and risks.

Second, the federal government should only propose specific policies once the public and policymakers have a good grasp of the issue, and once the public has been canvassed on the benefits and challenges of generative AI. Instead of doing this, the federal government has led with its proposed outcome: voluntary regulation.

Crucially, throughout this process, the industry organizations that operate these technologies should not, as they have been in these stakeholder consultations, be the primary actors shaping the parameters of regulation.

Government regulation is both legitimate and necessary to address issues like online harms, data protection and the preservation of Canadian culture. But the Canadian government's deliberate hobbling of its consultation processes is hurting its regulatory agenda and its ability to give Canadians the regulatory framework we need.

The federal government needs to engage in substantive consultations to help Canadians understand and regulate artificial intelligence, and the digital sphere in general, in the public interest.


This article was originally published at theconversation.com