AI note-taking: Enjoying convenience with a side of caution

The rapid growth of artificial intelligence (AI) has produced many helpful tools, including AI note-taking apps that have boosted productivity for professionals. In a previous Insight, we discussed the potential privacy and other risks of using AI note-taking apps. In this companion article, we highlight a recent incident in which an AI note-taking app was left enabled during a meeting between physicians at an Ontario hospital, resulting in numerous patients’ personal health information being shared with people who should not have had access to it.
The Ontario incident
In September 2024, a group of hospital physicians held a virtual meeting to discuss patients who had been admitted to the hospital. The meeting invitation list included former employees, and one individual on the list had enabled Otter.ai, an AI note-taking tool, to automatically record the meeting. After the meeting concluded, Otter.ai automatically sent a transcript summary of the meeting – including sensitive personal health information of those patients – to the entire invitation list. Several recipients were no longer employed by the hospital and should not have had access to the information; one such former employee received the summary at their personal email address.
Considerations for organizations implementing AI
Organizations wishing to use AI should be mindful of the tools they adopt to create efficiencies in the workplace. Important questions to consider include:
- What information is being collected?
- How is that information being used or shared?
- What policies does the organization have in place to govern the use of AI tools and ensure privacy is protected?
Professionals who have an obligation to keep their clients’ information confidential, such as physicians, nurses or lawyers, should be particularly mindful of the risk of inadvertently breaching their professional duties. Information collected by AI tools may be stored in cloud-based systems that can be accessed remotely and are vulnerable to data breaches. Furthermore, as the Ontario incident illustrates, failing to implement these AI tools responsibly can lead to unintended consequences that put individual users and organizations at risk.
Organizations should prioritize putting robust policies and procedures in place for the responsible use of AI tools in their operations, taking care to ensure users are aware of, and trained on, the permitted uses of such tools. Proper due diligence on the AI platform itself is also important – for example, to understand where data is stored and to assess the applicable data security risks and corresponding mitigation strategies.
The MLT Aikins technology, intellectual property and privacy team can provide the legal perspective your organization needs to understand and navigate issues related to AI. As AI and emerging technologies continue to transform industries, the need for specialized legal services in this domain is becoming increasingly important. Our legal professionals in this field provide critical guidance on navigating the complex regulatory landscape, managing risks and ensuring ethical and lawful use of innovative technologies.
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.