Negotiating technology agreements in the age of AI: Key considerations for Canadian organizations

Artificial intelligence has moved rapidly from a novelty to an operational reality, with organizations across Canada using it for document automation, customer service chatbots, predictive analytics and cybersecurity monitoring. But while the technology has evolved dramatically, the contracts that govern these vendor relationships have not always kept pace. Traditional technology agreements often fail to address the distinct risks that AI introduces, and organizations that rely on standard-form vendor terms may find themselves exposed in ways they did not anticipate.
This Insight builds on themes explored in our previous articles, which covered topics such as managing privacy and cybersecurity risks with vendors, AI note-taking tools and e-commerce terms and conditions, by examining key contractual provisions for AI procurement. We also highlight the Canadian Centre for Cyber Security’s Top 10 Artificial Intelligence Security Actions primer, which offers practical guidance for strengthening security as organizations adopt AI.
Why AI agreements require a different approach
Conventional technology agreements typically address licensing, uptime, support and data security. However, unlike traditional software, where outputs are deterministic and predictable, AI systems generate variable outputs that may change over time as models are updated or retrained. The data that flows through these systems may be used in ways that go well beyond what organizations expect, and the potential for inaccurate, biased or otherwise problematic outputs creates liability questions that standard limitation-of-liability clauses were never designed to answer.
Canadian organizations face additional considerations. Provincial and federal privacy laws impose specific obligations on how personal information is collected, used and disclosed. Where an AI vendor processes personal information on behalf of a customer, the contractual framework must ensure that these obligations are met not only by the primary vendor but also by any subprocessors in the data pipeline.
Data use and training restrictions
Perhaps the most critical issue in any AI vendor agreement is whether the vendor can use your data, client data or the outputs generated from that data to train, retrain or improve its AI models. Many vendors’ standard terms include broad language permitting data use for “product improvement” or “service enhancement.” Some frame this in terms of “aggregated” or “de-identified” data, but the practical reality is that once information has been incorporated into model weights through training, it cannot be meaningfully extracted or deleted.
For organizations handling sensitive or confidential information, this creates significant risk. If a vendor trains its model on your data, patterns and insights from your information may influence outputs generated for other customers, including competitors.
Subprocessors and the data supply chain
Many AI vendors rely on multiple subprocessors, such as third-party foundation models, cloud infrastructure providers and vector database services. Your data may flow through multiple entities, each with its own data practices and terms of service.
The primer emphasizes that organizations must understand the full AI supply chain and assess the security posture of each component, including knowing where your data is stored and processed and verifying that third-party providers meet appropriate security standards.
Output ownership and intellectual property
Ownership of AI-generated content remains uncertain under Canadian law, creating risks for organizations that rely on such outputs.
Contracts should clearly assign ownership of all outputs generated using customer data. Where the vendor’s underlying technology is proprietary, the agreement should draw a clear line between the vendor’s intellectual property in its platform and the customer’s ownership of outputs and data.
Performance standards and accuracy
Traditional service level agreements that focus on uptime and response time are inadequate for AI tools, whose accuracy and reliability can shift as models evolve.
Contracts should include measurable performance standards that go beyond availability, such as accuracy thresholds, hallucination rate benchmarks or fairness metrics. Additionally, the primer recommends continuous monitoring and clear contractual remedies to address degraded AI performance.
Cybersecurity and the Canadian Centre for Cyber Security’s AI primer
The primer identifies key security risks associated with AI, including data theft, adversarial manipulation of AI systems and the misuse of AI by threat actors, and provides actionable recommendations for mitigating those risks.
Among the primer’s key themes is the recognition that AI systems are increasingly being targeted by sophisticated adversaries, including through AI-driven phishing attacks. The primer emphasizes the importance of securing AI systems throughout their lifecycle, from procurement and deployment through to ongoing monitoring and eventual decommissioning.
Liability allocation and risk transfer
Standard vendor liability caps often fail to reflect the potential harm that can flow from faulty AI outputs. Rather than removing caps entirely, contracts should carve key AI-specific risks out of the general limitation of liability. Organizations should also require vendors to carry adequate insurance that covers AI-related claims and to confirm that their policies do not contain AI exclusions.
Termination and data portability
Getting out of an AI vendor relationship can be more complicated than terminating a traditional software licence. Standard data deletion clauses may not be sufficient to ensure that your information is fully removed from the vendor’s systems, particularly where data has already been incorporated into trained models. Agreements should address data portability, including export formats and transition assistance, and specify how deletion and certification obligations apply on termination.
Practical steps for organizations
Negotiating AI vendor agreements does not require reinventing the wheel, but it does require a deliberate approach that accounts for the unique characteristics of AI technology. Here are some practical steps to consider:
- Map your data flows before negotiations begin
- Establish benchmarks for evaluating vendor security practices using the primer
- Be prepared to walk away if a vendor refuses to negotiate
- Leverage your team’s legal, technical and privacy expertise to assess vendor claims and contracts
As AI becomes an increasingly integral part of how organizations operate, the contracts that govern vendor relationships must evolve to address the distinct risks that AI introduces.
The MLT Aikins AI and Emerging Technologies team regularly advises organizations on negotiating technology agreements, developing AI-use policies and managing the legal and regulatory risks associated with emerging technologies. If your organization is evaluating AI tools or reviewing vendor agreements, we are here to help.
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.