The legal landscape of generative AI in Canada: Understanding the voluntary code of conduct

Organizations in Canada are increasingly incorporating artificial intelligence (AI), including generative AI, into their operations. This shouldn’t come as a surprise: AI systems offer a vast array of potential benefits for all types of organizations, such as improving supply chain management, reducing energy consumption, enhancing financial services and strengthening fraud detection. However, the use of this technology also creates legal and reputational risks, as generative AI is coming under greater scrutiny from regulators and the public.
AI regulatory history
In 2022, Canada began work on a formal regulatory framework for AI systems with the introduction of the proposed Artificial Intelligence and Data Act (AIDA) to the House of Commons as part of Bill C-27. Bill C-27 passed second reading in the House of Commons in April 2023. However, it died on the Order Paper when Parliament was prorogued ahead of the 2025 federal election, which terminated all unfinished legislation, including Bill C-27. If the newly elected Parliament wants to pass an iteration of the AIDA, or other AI legislation, it must restart the process by introducing a new bill to the House of Commons.
In September 2023, amid the growing popularity and use of generative AI software, the Government of Canada published the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (the Code). The Code is intended to serve as a bridge to regulation under future AI legislation and to address and mitigate certain risks associated with the use of generative AI. It does not create new legal obligations or alter existing ones. Instead, it offers Canadian organizations common standards to demonstrate responsible generative AI use until formal regulation comes into force.
Overview of the voluntary Code
The voluntary Code is intended to mitigate potential negative impacts of generative AI systems, including risks to health and safety, the propagation of bias and broader societal harms, particularly when these systems are used by malicious actors. Organizations that develop or deploy generative AI, operate in high-trust sectors or are concerned with protecting their public reputation should consider becoming signatories to the Code. Doing so demonstrates a proactive commitment to the responsible development and use of generative AI.
The Code sets out measures for developing generative AI systems, covering the ethical selection of methodologies, the collection and processing of datasets, and model building and testing. For managing generative AI systems in operation, the measures include controlling operating parameters, controlling access and actively monitoring the system’s operation. The Code is built on the following principles:
- Accountability – Organizations should understand the responsibilities associated with the ethical development of generative AI systems, manage risks appropriately and share information with other organizations to help prevent gaps in responsible development
- Safety – Organizations should ensure that generative AI systems are subject to regular risk assessments and that necessary mitigation measures are in place before deployment
- Fairness and Equity – Organizations should assess potential impacts on fairness and equity and address any issues that arise throughout system development and deployment
- Transparency – Organizations should publish sufficient information for consumers to make informed decisions and for experts to assess risk management
- Human Oversight and Monitoring – Organizations should monitor the use of generative AI systems after deployment, with updates made as needed to address emerging risks
- Validity and Robustness – Organizations should ensure generative AI systems function as intended, are secure against cyber threats and respond as predicted to expected tasks and situations
Signatories to the Code, such as CIBC, CGI and IBM (a full list appears at the bottom of the Code), commit to supporting a strong and responsible AI ecosystem in Canada. Further information concerning the Code and how signatories are to implement these principles can be found in the Implementation Guide for Managers of Artificial Intelligence Systems.
Purchaser considerations
For technology purchasers evaluating potential or current vendors that use generative AI systems, checking whether a vendor is a signatory to the Code can be a useful gauge of its commitment to responsible AI development. Technology vendors, in turn, may wish to demonstrate their support for a responsible AI ecosystem in Canada by becoming signatories themselves.
Even if a vendor is not a signatory to the Code, the Code can still be used in customer assessments to determine whether the vendor currently operates under acceptable generative AI practices. The leading Technology, Intellectual Property and Privacy team at MLT Aikins will happily provide guidance on the commitments potential or current technology providers have made with respect to safe technology development, on what those commitments mean in practice, and on whether your organization would benefit from becoming a signatory to the Code.
Our previous articles on the topic of AI can be found here: New Canadian guidance on the use of artificial intelligence, New international guidance for artificial intelligence systems released, Canadian Centre for Cyber Security: Huge concerns with artificial intelligence and Privacy and cybersecurity risks with artificial intelligence.
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.