On January 20, 2026, MLT Aikins and Counselwell co-hosted Calgary Breakfast & Learn: AI Rapid-Fire for In-House Counsel, a fast-paced look at how artificial intelligence (AI) is reshaping day-to-day legal practice. Lawyers Jean Torrens, Erika Carrasco and Josh Krane walked attendees through the ways AI is changing employment, litigation and competition law as well as the practical steps counsel can take to manage risk while capturing value.

This article summarizes the key takeaways from that event.

Calgary’s AI momentum

Recent federal support directs $5 million to the University of Calgary to host the national Energy Modelling Hub, positioning the city as a centre for open, evidence-based planning of a reliable and affordable energy system. That investment pairs with additional funding under the Energy Innovation Program to advance modelling, decarbonization scenarios and open-source tools – all signalling that data-driven decision-making is becoming a core capacity for Canadian energy markets.

At the same time, AI is moving from models to real-time operations.

Calgary’s Arcus Power, with the University of Calgary and Elemental Energy, secured $3 million (including $900,000 from SCALE AI) to build an AI platform that intelligently manages battery storage assets. By assessing market prices, grid conditions and battery health, the system will optimize charge and discharge in real time, helping operators increase revenue while reducing equipment wear – all supporting the growing role of storage in Canada’s net-zero transition.

For in-house legal teams, these developments point to a near-term future in which commercial decisions are increasingly AI-informed, and in which disputes may follow. Both will call for new contract clauses addressing data rights, as well as updated risk management strategies.

Risk hotspots

Across employment, litigation and competition law contexts, six themes emerged as the highest-impact risks for in-house counsel:

  1. Accuracy and reliability – AI can fabricate citations (so-called hallucinations) or misapply the law. Build source‑checking into all workflows and require sign‑off on any final product where AI was involved.
  2. Confidentiality and privilege – AI tools may store prompts and meeting notes, or train on inputs, unintentionally transferring IP or confidential data to third‑party vendors. Prefer enterprise solutions with contractual data protections; define “no‑go” inputs in an internal policy.
  3. Professional responsibility – Counsel remains responsible for all work product. Under Rule 3.1-2 of the Law Society of Alberta’s Code of Conduct, professional competence extends to the technology a lawyer uses, which includes knowing when and how to use (and limit) AI.
  4. Bias and fairness – AI can reproduce or amplify historic bias. Ask vendors about training data and evaluation; introduce simple human-rights-aligned bias checks for HR and operations teams to use.
  5. Security and discovery – Many AI vendors own or control the data provided to them, sometimes storing it outside of Canada, raising discoverability and data-sovereignty issues. Clarify where data is processed and stored; require encryption and access controls; classify AI-created documents for retention and discovery purposes.
  6. Procedural compliance – Courts and tribunals are developing guidance on AI use in filings. Several jurisdictions now require disclosure of how AI assisted in preparing materials. Keep up-to-date on these ever-evolving notices and decisions applicable to your jurisdiction.

Additional practice area-specific considerations

Employment law

AI‑enabled HR platforms are increasingly used for recruitment, screening, monitoring attendance, tracking productivity and assessing accuracy. While efficient, these tools raise privacy, human rights and data-handling concerns.

MLT Aikins helps clients develop AI Use Policies and training, including reviewing collective agreements and other employment-related contracts to ensure these tools are deployed in a manner that is compliant, transparent and consistent with privacy, human rights and workplace obligations.

Litigation and risk management

AI used in litigation raises unique risks around privilege, accuracy and procedural transparency. Courts are increasingly requiring counsel to disclose when AI has assisted in preparing materials, and lawyers remain responsible for verifying all AI-generated content. The Reddy v Saroya, 2025 ABCA 322 decision, for example, reinforces that competence in legal technology is a lawyer’s responsibility.

Competition and commercial law

As businesses begin to rely on algorithmic pricing and AI‑enabled analytics, new competition risks can emerge – particularly if systems behave in ways that unintentionally mirror coordinated market activity.

To mitigate this, counsel should ensure vendor contracts spell out how training data will be used, who owns the resulting intellectual property, what security and privacy controls are in place, and which sub‑processors have access to the data. Procurement agreements should also include bias‑monitoring obligations, audit rights and ongoing testing requirements to ensure the tools continue to operate as intended.

Key takeaways

AI will continue to compress timelines and expand what small legal teams can deliver.

The practical path forward is clear: secure tools, strong policies, disciplined verification and contracts that reflect how AI really works.

To help in-house counsel meet these demands responsibly, MLT Aikins offers:

  • Executive and workplace AI education to update teams on emerging risk, governance expectations, fiduciary obligations and contractual/insurance impacts.
  • Comprehensive AI risk assessments covering data use, storage, deletion, IT readiness, legal frameworks and policy gaps – along with recommended updates before deployment.
  • Development and implementation of internal AI policies, including approved tools, data‑security standards, prohibited inputs, bias screening, human‑verification requirements and recordkeeping.
  • Training, communication and monitoring to ensure employees and third parties use AI tools appropriately and comply with organizational policies.
  • Strategic advisory support on how AI affects corporate strategy, procurement, contracts and operational procedures, including ongoing risk‑management strategies.

If your organization is deploying AI or navigating data-driven markets, our labour and employment, AI and emerging technology, litigation and competition law teams are here to help.

Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.
