AI governance: What organizations need to know in 2026

Artificial intelligence has moved from pilot projects to core operations for many Canadian organizations. While Canada does not yet have comprehensive federal AI legislation, a patchwork of provincial initiatives, privacy laws and soft-law guidance now shapes how organizations must approach AI governance. This post surveys the current landscape – with a focus on Western Canada – and offers practical steps for compliance.
The federal picture: A work in progress
The Artificial Intelligence and Data Act (AIDA) did not proceed when Parliament was prorogued in January 2025, though its core concepts – risk-based classification, human oversight and accountability – continue to influence Canadian regulatory thinking. In May 2025, Prime Minister Mark Carney appointed Evan Solomon as Canada’s first Minister responsible for Artificial Intelligence and Digital Innovation, signalling that AI policy remains a priority for the Federal Government. An AI Strategy Task Force is now consulting on Canada’s next national AI strategy. In the meantime, federal institutions are already subject to the Treasury Board’s Directive on Automated Decision-Making, which requires algorithmic impact assessments and transparency measures. The assessment tool and completed assessments are both available online and serve as useful references for all organizations, as do the Federal Government’s other resources on the responsible use of AI in government programs and services.
Given that there is no comprehensive federal regulatory framework yet, international developments, a voluntary code of conduct, voluntary standards, regulatory guidance and existing privacy legislation remain the primary guardrails for organizations using AI in Canada.
Western Canada: Provincial developments
In addition to specific regulatory guidance for different sectors, the following provincial developments provide helpful guidance to all organizations:
- British Columbia has a Policy on the use of generative AI and a Digital Code of Practice along with other resources for public sector employees and contractors
- Alberta’s Privacy Commissioner released a report in August 2025, which sets out key considerations and recommends that Alberta create its own AI law and update its privacy legislation to address automated decision-making
- Saskatchewan has guidance for public sector employees, including generative artificial intelligence guidelines
- Manitoba is developing a taskforce report on the future of technology, innovation and productivity within the province
Ontario principles: A national model
Ontario has moved furthest on AI-specific rules. Its Enhancing Digital Security and Trust Act (passed in late 2024) sets out accountability requirements for public sector AI use, though regulations are still pending.
In January 2026, Ontario’s Information and Privacy Commissioner and the Ontario Human Rights Commission jointly released six principles for responsible AI use:
- Accountability – Assign clear responsibility for AI oversight. Keep humans in the loop. Document your decisions.
- Transparency – Be able to explain how your AI works. Tell people when they’re interacting with AI.
- Fairness and non-discrimination – Check for bias and discrimination in your AI systems and data.
- Privacy by design – Build privacy in from the start. Collect only what you need and let people opt out of automated decisions that significantly affect them.
- Human oversight – Monitor AI systems and shut them down if they start producing unexpected or harmful results. Ensure AI produces accurate results and works consistently over time.
- Redress – Provide mechanisms for individuals to challenge AI decisions that affect them.
While aimed at Ontario’s public sector, these principles offer a practical framework for any Canadian organization.
Privacy law: The primary framework
In the absence of dedicated AI legislation, Canada’s privacy laws remain the primary regulatory framework. The Privacy Commissioner of Canada’s joint guidance on generative AI (December 2023) emphasizes consent, transparency and data minimization. Both federal and provincial privacy laws increasingly require transparency when AI materially affects individuals, and organizations should be prepared to explain how AI influences decisions in employment, credit, insurance or service delivery.
Practical steps for organizations
Here are five practical steps your organization should take:
- Conduct an AI inventory – Identify all AI systems in use or under development, including third-party tools, and classify by risk level
- Perform algorithmic impact assessments – For systems affecting individuals’ rights or access to services, use methodologies like the federal Directive on Automated Decision-Making
- Establish governance and policies – Designate accountability for AI systems, develop policies on acceptable use and human oversight and ensure appropriate staff training
- Prioritize transparency – Document how AI systems work and be prepared to explain AI use to affected individuals and regulators
- Leverage standards and guidance – Review and implement voluntary and regulatory standards and guidance relevant to your industry
Looking ahead
To be well-positioned as the regulatory landscape matures, organizations should monitor emerging issues in AI and build robust governance programs grounded in privacy principles, transparency and accountability. If you’re thinking about AI adoption or want to review your current practices, our AI and Emerging Technologies team can help you navigate these developments and put practical governance in place.
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.
