The growing use of artificial intelligence (AI) within Canadian organizations is highlighting the need to balance AI’s innovative capabilities with stakeholder expectations for responsible AI use. At the same time, as AI adoption continues to grow, government bodies, regulators and standards organizations are working to establish legislation and voluntary codes for organizations that develop and use AI.

In a previous insight, we discussed the Canadian government’s Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. In this companion article, we highlight some of the other voluntary standards aimed at helping organizations implement AI responsibly, namely ISO/IEC 42001:2023 and NIST’s AI Risk Management Framework.

Overview of ISO/IEC 42001:2023 

ISO/IEC 42001, published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), is an international standard that helps organizations of any size, including non-profits, establish a structured framework for governing AI projects, AI models and data governance practices. Rather than focusing on a specific AI application, it addresses AI-related risks and opportunities across the variety of applications an organization may use, providing value for any business. The standard follows a structured plan-do-check-act (PDCA) cycle, which helps organizations implement, monitor, improve and adapt their AI management practices.

The main pillars of the ISO/IEC 42001 framework include:  

  • Responsible AI – ensures the ethical and responsible use of AI through robust governance systems and controls 
  • Reputation management – enhances public and stakeholder trust in AI applications through transparency, fairness and accountability 
  • AI governance – supports compliance with legal and regulatory standards by clearly outlining roles and responsibilities across the organization 
  • Practical guidance – helps identify and manage AI-related risks effectively, including bias, accountability, transparency and data protection 
  • Identifying opportunities – encourages innovation and growth within a structured framework, supported by AI performance evaluations, internal audits and management reviews

ISO also has other AI-related standards including, but not limited to:  

  • 23053 – a framework for describing generic AI systems that use machine learning 
  • 23894 – guidance on AI-related risk management for organizations 
  • 5339 – guidance for AI applications 
  • 24027 – addresses bias in AI systems and AI-aided decision-making

ISO/IEC 42001 certification indicates that an independent third party has verified that an organization has put in place the framework and governance tools needed to effectively manage the risks and opportunities associated with the use and development of AI. Certification provides stakeholders with additional assurance that your organization is committed to responsible AI use and takes data privacy seriously.

NIST AI Risk Management Framework  

The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework is a resource for organizations designing, developing, deploying or actively using AI systems, helping them manage AI-related risks and promote the trustworthy and responsible development of AI systems. The framework is voluntary and is intended for organizations of all sizes and in all sectors looking to implement effective AI governance.

The framework is divided into two parts:  

Part 1 outlines AI risks and how to build stakeholder trust through safe, transparent and fair AI use. Part 2 outlines the systems organizations can use to control those risks, organized into the four core functions described below:

  1. Map – context is established and risks related to that context are identified, considering the intended purpose, beneficial uses, laws, norms and expectations 
  2. Measure – identified risks are assessed, analyzed and tracked through appropriate methods and metrics 
  3. Manage – identified risks are prioritized, responded to and managed based on the assessments and other analytical output of the Map and Measure functions 
  4. Govern – policies, procedures and practices for mapping, measuring and managing AI risks are in place across the organization, transparent and implemented effectively

Many public sector entities in Canada, including the federal government, require contractors to be NIST-compliant. Organizations that deal with federal and provincial governments may therefore want to consider bringing their AI use in line with the NIST Framework.

Considerations for organizations 

Organizations that use AI in any part of their business should consider following one or more of the voluntary standards or codes that exist today, particularly given the current absence of binding legislation. Following a voluntary standard or code can also create opportunities to improve business processes and the outcomes generated by AI systems, creating more value for your organization. It can also improve risk management, accountability and stakeholder confidence, and may better position your organization for the inevitable introduction of legislation and regulation governing the use of AI.

Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation. 
