Is life online safe?
The highly anticipated Online Harms Bill takes significant steps toward regulating online risks, but not all the gaps in protecting users have been filled.
By: Nina Hernandez Jayme
The online world is rife with harm, and Canada is finally taking the necessary steps to do something about it.
The highly awaited Online Harms Bill (Bill C-63) was tabled last Monday, introducing seven categories of harmful content to be regulated and creating three regulatory bodies devoted to protecting users from online harm. The bill was met with the usual scrutiny and criticism that follow any new piece of legislation. However, it should be seen as a foundational step towards online regulation rather than a panacea for everything that is wrong with the internet.
The Online Harms Act is a significant first attempt by the Canadian government to deal with the spillover from Pandora's box. Regulating the online space is as complex as the diverse content it hosts, and protecting users from harm while preserving freedom of expression is a delicate balancing act for policymakers.
The bill defines seven categories of harmful content, including content that targets children, incites violence or terrorism, promotes hatred, or involves the non-consensual sharing of intimate images. These categories alone do not encapsulate every harm and threat present online. Nonetheless, defining them creates a crucial framework for addressing some of the most pressing issues in the online space.
Canada has strategically incorporated lessons from other regulatory frameworks around the world, such as the European Union's Digital Services Act (DSA), the United Kingdom's Online Safety Act, and Australia's Online Safety Act. As noted by Taylor Owen, Director of the Centre for Media, Technology and Democracy, "It's very clear they drew on successful aspects of regulations from various countries to build this framework."
Fortunately, the bill achieves some clear victories. For example, the requirement to label automated content, such as content generated by artificial intelligence, promotes transparency and guards against manipulation designed to deceive individuals.
Another feature of the bill addresses regulatory fines in the tech world, which so far have amounted to little more than a minor inconvenience for wealthy technology companies. The bill considerably strengthens monetary penalties, allowing fines of up to eight per cent of a company's global revenue. It also requires social media platforms and livestreaming services to remove the harmful content outlined in the bill within 24 hours.
While the bill addresses critical issues, concerns have been raised about the broad scope of other measures, such as updating the Criminal Code to include a definition of hate speech and introducing new penalties for violence stemming from it.
Vivek Krishnamurthy, Associate Professor of Law at the University of Colorado and Director of the Samuelson-Glushko Technology Law and Policy Clinic, raises concerns about the exceptionally broad scope of this provision: "this is where the criminal code and the human rights amendments become politically problematic." Krishnamurthy suggests reconsidering the provision, since a similar section was repealed 12 years ago by the Stephen Harper government on the grounds that it suppressed free speech.
The legislation, while comprehensive, excludes certain services, such as private messaging apps like WhatsApp and chat functions within gaming platforms. These venues are commonly used for recruitment into harmful groups.
Amarnath Amarasingam, Assistant Professor in the School of Religion and Department of Political Studies at Queen’s University, noted that “these chat platforms are significant conduits not only for far-right content but also for disseminating Hindu nationalist, anti-Muslim, and antisemitic content — areas currently untouched by existing legislation.” This omission may be a strategic choice to help the bill pass quickly.
Other areas outside the scope of this legislation include IP blocking, gaming, search engines, and modifications to platform algorithms, as well as definitions of mis- and disinformation.
The bill also creates the Digital Safety Commission, a Digital Safety Ombudsperson, and the Digital Safety Office. However, it is not clear how these new bodies will interact with other players in the regulatory space, such as the Canadian Radio-television and Telecommunications Commission (CRTC), the Privacy Commissioner, or the Canadian Human Rights Commission (CHRC). Another concern is oversight: who will monitor the new commission's activities to ensure accountability?
Similarly, the lack of clear delineation of responsibilities and functions among these new entities raises concerns, as the public may find it difficult to determine which office to approach for information or help. As Heidi Tworek, Canada Research Chair and Director of the Centre for the Study of Democratic Institutions, put it, “this is a bill that affects everyone that is going online, and seeing how we make the complexities of this clear is going to be a real challenge.”
Both the government and social media companies play a crucial role at this stage and should devote space to educating their audiences about their rights and the implications of this legislation. The bill is an opportunity not only to encourage debate but to cultivate more responsible and informed users, helping to establish a safer online space.
Despite these limitations, the Online Harms Bill marks a significant step forward for Canada, one of the most connected countries in the world with 34.47 million internet users, as it joins global efforts to regulate the online space.
Thus far, the debate has been both political and policy-oriented, and the public must remain vigilant to keep it from becoming exclusively political. As the debate continues, it is now up to the companies to respond to these robust safety measures while the public watches from behind the screen.
** This article references the initial reactions to the Online Harms Bill from five members of the Expert Advisory Group on Online Safety, as broadcast by the Centre for Media, Technology and Democracy. Watch the panel here.
Nina Hernandez is from Torreon, Coahuila, Mexico, and holds a double major in Government and Rhetoric and Writing from The University of Texas at Austin. She has experience working in the private and public sectors in both the United States and Mexico. Her passions lie in development, digitalization, and multilateral alliances with international organizations. With a mission to create strong alliances, she has worked as a political consultant in Washington, D.C. for international clients in the fields of engineering, pharmaceuticals, and technology. She has also been involved in outreach efforts for urban policies at the local urban planning department in her state. For her most recent job, Nina was a subdirector for the federal health regulatory agency in Mexico, where she was closely involved with authorizations, permits, and international equivalence agreements for the release of vaccines during the COVID-19 pandemic. In her free time, she enjoys doing pottery and hiking.