The 12-point AI checkup: A practical starting point for Western Canadian organizations

Your organization is almost certainly using artificial intelligence. Whether it is a customer service chatbot on your website, an automated hiring screener in your HR department, an algorithmic pricing tool that adjusts your prices based on demand, or a generative AI assistant your employees adopted on their own initiative, AI has moved from pilot projects to daily operations faster than most leadership teams anticipated. The question is no longer whether your organization uses AI – it is whether anyone is governing how it gets used.
For organizations across Western Canada, this question carries real consequences. The regulatory environment is shifting. Governments are starting to enact legislation targeting practices that use AI to discriminate against consumers and employees. Privacy commissioners are actively investigating how AI systems collect and use personal information. Courts have already held organizations liable for the outputs of their AI tools. And the gap between what AI can do and what your policies account for is widening every quarter.
This Insight is intended as a practical starting point for organizations that know they need to get a handle on AI governance but are not sure where to begin. It is built around twelve points – six foundational governance areas and six immediate risk areas – that together form a comprehensive AI checkup for your organization. It is not a compliance checklist; it is a framework for asking the right questions internally, identifying the areas of greatest exposure and beginning to build the structures that will position your organization well as the regulatory landscape continues to evolve. It concludes with a health check your leadership team can work through together, organized around all twelve points.
Why AI governance matters now – even without a comprehensive AI law
Canada does not yet have a comprehensive federal statute specifically regulating artificial intelligence. The proposed Artificial Intelligence and Data Act, which formed part of Bill C-27, did not proceed when Parliament was prorogued in January 2025. Canada issued the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems (the Code) as guidance in the absence of formal legislation. In May 2025, the federal government appointed Canada’s first Minister responsible for Artificial Intelligence and Digital Innovation, signaling that AI policy remains a priority and that new legislation is likely on the horizon.
However, the absence of a purpose-built AI statute does not mean AI is unregulated. The existing legal framework already imposes meaningful obligations on organizations that deploy AI systems, and Western Canadian organizations are subject to a combination of federal and provincial laws that apply squarely to AI-related activities.
The primary legislation governing AI use is privacy law, although Canada’s federal Competition Act also establishes general marketplace framework rules – relating to agreements between competitors and monopolization – that can extend to how data are collected and used.
In Alberta and British Columbia, the provincial Personal Information Protection Acts (PIPA) govern private-sector collection, use and disclosure of personal information and have been recognized as “substantially similar” to the federal Personal Information Protection and Electronic Documents Act (PIPEDA). In Saskatchewan and Manitoba, PIPEDA applies directly to private-sector commercial activity because those provinces have not enacted their own substantially similar private-sector privacy legislation (Manitoba’s previous attempts to pass equivalent legislation through Bills 200 and 207 were unsuccessful). Regardless of which statute applies, any AI system that processes personal information triggers obligations around consent, purpose limitation, transparency and security safeguards that your organization must meet today.
Alberta’s legislative review of PIPA is also worth monitoring closely. The committee reviewing the statute completed its report in early 2025 and recommended material amendments, including a penalty-based enforcement regime and requiring clear definitions, standards and regular risk assessments for handling de-identified and anonymized data – all of which are directly relevant to AI systems that process large volumes of information.
The practical takeaway is straightforward: Waiting for a federal AI law before building internal governance means accepting a period of unmanaged risk under laws that already apply. Organizations that begin now will be better positioned to adapt when the legislative landscape does change, rather than scrambling to retrofit compliance after the fact.
Points 1–6: Where AI governance needs to start
AI governance can seem overwhelming in the abstract. The first six points of the AI checkup focus on the foundational governance structures that every Western Canadian organization needs in place. These are the building blocks; without them, managing the specific legal risks that follow is nearly impossible.
1. Know what AI you have
The single most important first step in any AI governance program is developing a clear picture of what AI tools your organization is actually using. This sounds simple, but in practice it is one of the most common blind spots. Organizations frequently discover that employees across departments have adopted AI tools – generative AI writing assistants, transcription services, data analysis platforms – without IT oversight or management approval. This is sometimes referred to as “shadow AI,” and it represents a significant source of unmonitored data exposure and compliance risk.
Your AI inventory should identify every AI system in use, including tools embedded in existing software platforms your organization already licenses (such as built-in AI writing assistants and automatic meeting summaries in video conferencing platforms). For each tool, your organization should be able to answer these basic questions:
- What data does it process?
- Where is that data stored?
- Does the vendor use client data to train or improve its models?
- Is the data stored in Canada or transmitted to servers in other jurisdictions?
Without this baseline inventory, governance is impossible because you cannot manage what you have not identified.
2. Establish acceptable use boundaries
Once you know what AI tools are in use, the next step is to establish clear internal rules about how AI may and may not be used within your organization. An AI acceptable use policy does not need to be lengthy or complex, but it does need to exist in writing and be communicated to all employees and, if applicable, other personnel such as contractors.
At minimum, an acceptable use policy should address:
- which categories of information may and may not be entered into AI tools (with particular attention to personal information, confidential business data and legally privileged material)
- which AI tools are approved for use and which are not
- what human review and oversight is required before AI-generated outputs are relied upon or shared externally
- who within the organization is responsible for approving new AI tools or use cases

Policies may also include guidance on employee training, intellectual property considerations (such as ownership of AI-generated outputs) and whether employees must disclose to clients or other stakeholders that AI was used in producing a work product.
Organizations that fail to set these boundaries often find that their employees are making individual risk decisions that no one in leadership would have approved if asked.
3. Address privacy obligations head-on
For most Western Canadian organizations, privacy law is the regulatory framework that bites hardest in the AI context. AI systems are data-intensive by design. They collect, process and generate outputs based on personal information in ways that can easily exceed the scope of consent your organization has obtained or that may not align with the purposes for which the information was originally collected.
Organizations deploying AI systems should be asking whether they have authority for the personal information their AI tools process, whether their privacy notices and consent mechanisms are adequate to cover AI-related uses of personal information, whether they have conducted or updated privacy impact assessments for AI-driven systems and whether their data retention and de-identification practices account for the way AI systems store and reuse information.
The federal Privacy Commissioner has been actively filling the regulatory gap through investigations, guidance documents and public statements emphasizing necessity, proportionality and transparency in AI-related data use. Provincial privacy commissioners have similarly signaled heightened expectations. For example, the Office of the Information and Privacy Commissioner of Alberta has already provided guidance for mandatory privacy impact assessments for AI scribe tools used in the healthcare industry. Organizations that treat privacy compliance as an afterthought in their AI adoption risk enforcement action under laws that already exist and apply.
4. Assign clear internal accountability
AI governance fails when no one owns it. In many organizations, AI adoption has been driven by individual departments (marketing, HR, operations, finance, etc.) with each adopting tools independently and without coordination. The result is a fragmented landscape where no single person or team has visibility into the organization’s overall AI exposure.
Effective AI governance requires someone to be accountable. That does not necessarily mean creating a new role or hiring an AI specialist. It does mean designating a person or a small cross-functional group with responsibility for maintaining the AI inventory, overseeing the acceptable use policy, coordinating with legal counsel on compliance questions and serving as the point of contact when AI-related issues arise.
For many mid-sized Western Canadian organizations, this responsibility may sit naturally with an existing privacy officer, compliance lead or general counsel. What matters is that the accountability is explicit, documented and supported by leadership.
5. Train your board and your people
An AI governance program is only as strong as the people expected to follow it. This is where many Western Canadian organizations have a significant gap: They may have adopted AI tools and even drafted an acceptable use policy, but they have not invested in ensuring that their board of directors, senior leadership and frontline staff actually understand the risks those tools create or the obligations the organization has assumed.
Board-level AI literacy is particularly important. Directors have fiduciary obligations to oversee risk, and AI introduces categories of risk – algorithmic bias, privacy exposure, reputational harm from AI-generated outputs and regulatory non-compliance – that many boards are not yet equipped to evaluate. Board members do not need to become technical experts, but they do need to understand enough about how AI systems work to ask informed questions, assess management’s risk reporting and exercise meaningful oversight. A board that cannot engage substantively with AI-related risk is a board that is not fulfilling its governance role.
For staff, training needs to be practical and role-specific. A customer service team using an AI chatbot needs to understand the limits of that tool and when to escalate. A marketing team using generative AI to draft content needs to understand the intellectual property and accuracy risks involved. An HR team using AI-assisted screening tools needs to understand the human rights implications of automated decision-making. Generic, organization-wide awareness sessions have their place, but they are not a substitute for targeted training that connects AI risk to the actual work people do every day.
Training should also be recurring, not a one-time event. The AI tools your organization uses will change, the regulatory expectations will evolve, and the risks will shift accordingly. An annual refresh – at minimum – ensures that your people’s understanding keeps pace with your organization’s AI footprint.
6. Monitor the regulatory horizon
The regulatory landscape for AI in Canada is not static. Organizations that build governance programs in isolation from regulatory developments risk building structures that do not align with where the law is headed. At the federal level, the core concepts from AIDA and the Code – risk-based classification of AI systems, requirements for human oversight and accountability obligations – continue to influence regulatory thinking and are expected to inform any successor legislation.
At the provincial level, legislative reviews and regulatory guidance are accelerating. At the federal level, the Directive on Automated Decision-Making applies only to government use of AI, but it provides a useful reference model for private-sector organizations seeking to benchmark their own governance practices. The Competition Bureau has also been examining how AI intersects with competition law, including concerns around algorithmic pricing and AI-driven deceptive marketing practices. And provincial human rights legislation applies to AI-driven decisions that result in discriminatory outcomes, regardless of whether the discrimination was intentional.
Organizations do not need to predict exactly what the next piece of legislation will look like. They do need governance structures that are flexible enough to adapt as requirements evolve, rather than rigid frameworks that will need to be rebuilt from scratch.
Points 7–12: Six risk areas your organization should have on its radar
With the governance foundations in place, the second half of the AI checkup turns to the specific legal risk areas where Western Canadian businesses are most likely to encounter immediate exposure. These are the issues that will test whether your governance structures actually work.
7. You are liable for what your AI says
If your organization uses a chatbot, virtual assistant or any automated system that communicates with customers, you need to understand that you are responsible for what it says, even when it gets things wrong. For example, in Moffatt v Air Canada, 2024 BCCRT 149, Air Canada was held liable for incorrect information provided by its customer service chatbot. The chatbot suggested a customer could retroactively apply for bereavement fares, contradicting Air Canada’s standard procedure. The customer successfully claimed damages based on his reliance on the chatbot’s representations.
This is consistent with well-established principles that an entity remains responsible for the acts of its tools, including pre-programmed and automated systems. For Western Canadian businesses deploying customer-facing AI tools, the practical lesson is straightforward: Do your due diligence, audit your AI tools regularly for accuracy, ensure the information they provide aligns with your current policies and practices and review your terms of service and disclaimers for adequacy. Document any audits and testing activities you conduct on your AI tools and make sure the employees who use those tools are aware of their responsibility to independently verify any AI-generated output.
8. Intellectual property (IP) risks are not theoretical
If your organization uses generative AI to create content, marketing materials, reports or other work products, you face a set of unresolved intellectual property questions that create real commercial risk right now.
The first question is whether your organization can own what it creates with AI. Under the Copyright Act, copyright subsists in original works created by an “author,” and Canadian law has historically understood authorship as requiring a human creator who exercises “skill and judgment.” The Canadian Intellectual Property Office has registered at least one work listing AI as a co-author, but that registration is being challenged before the Federal Court by the Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic, which argues that AI cannot function as an author for copyright purposes in Canada. The matter has not yet been decided, and the outcome of the challenge may provide greater insight into Canada’s approach to AI ownership. The Government of Canada’s consultation on copyright in the age of generative AI confirmed broad support among stakeholders for keeping human authorship central to copyright protection. Where generative AI produces content with minimal human creative input, it remains uncertain whether copyright protection attaches to that output at all – leaving your organization potentially unable to protect what it has paid to create.
The second question is infringement risk from AI training data. Large language models and image generators are trained on vast datasets that may include copyrighted material. A coalition of Canadian news publishers has filed a lawsuit against OpenAI, alleging that their content was used without permission to train ChatGPT, and that lawsuit has survived a jurisdictional challenge and will proceed in Ontario. Multiple additional class actions have been filed in federal and provincial courts across Canada. If your organization uses generative AI outputs in its products, marketing or client-facing work, you may face downstream exposure if those outputs incorporate or closely replicate copyrighted material from training datasets.
The third question is what might be called the “black box” problem. Generative AI systems may not be transparent about the sources they draw on when producing content, which makes it difficult for your organization to conduct meaningful due diligence on whether AI-generated outputs may infringe third-party IP rights.
Together, these risks warrant clear internal policies on the use of generative AI, including restrictions on using AI-generated content in situations where IP protection is critical.
9. AI procurement contracts need specific attention
If your organization is acquiring AI-powered services – whether a customer service chatbot, a data analytics platform or an automated decision-making tool – your standard service agreements and software licence terms are likely not adequate to address the unique risks that AI introduces.
AI procurement contracts should specifically address several categories of concern. IP ownership provisions must clearly allocate rights in the AI model itself, in the training data used and in the outputs generated. Data-handling clauses must address how your data will be used, whether it will be used to train or improve the vendor’s models and where it will be stored and processed – particularly given cross-border data transfer implications under privacy laws.
Agreements should also address bias and accuracy. Vendors should be required to provide transparency about testing and validation processes and to warrant that their systems have been evaluated for discriminatory outputs. AI systems that produce discriminatory outcomes can expose your organization to liability under the Canadian Human Rights Act and the applicable provincial human rights code, regardless of whether the discrimination was intentional. Indemnification provisions should account for errors, hallucinations and autonomous decision-making – risks that are qualitatively different from the bugs and defects contemplated by traditional software warranties.
Compliance provisions must account for the evolving regulatory landscape. Your organization needs the contractual ability to require vendor cooperation with regulatory investigations, to obtain information necessary for privacy impact assessments and to ensure ongoing compliance with any new AI-specific legislation that may be enacted during the term of the agreement.
10. Deepfakes and synthetic media are a business risk
Deepfakes and synthetic media represent a growing vector of legal and reputational risk. Advances in generative AI have made it possible for anyone with modest technical skill to create false or misleading audio and video content that appears real.
The Competition Bureau of Canada has specifically flagged the use of deepfakes in deceptive marketing as a serious concern. Over a third of the submissions received during the Bureau’s 2024 public consultation on AI and competition expressed concern about the potential for AI to be used in deceptive marketing practices, including the generation of fake online reviews, endorsements, impersonations and tailored phishing campaigns. The Bureau has emphasized that the Competition Act already prohibits representations that are false or misleading in a material respect and that this prohibition applies regardless of whether AI is involved in generating the content.
For your organization, the risks are multidirectional. You face potential victimization through deepfake impersonation of your brand, executives or products. You also face liability exposure if you use AI-generated content in your own marketing without adequate disclosure or verification. Additionally, your employees and customers may be targeted by increasingly sophisticated AI-generated phishing and social engineering campaigns. Internal protocols should address authentication of public communications, verification of marketing materials for AI-generated content and employee training on recognizing synthetic media.
Additionally, an Ontario court recently commented on a gap in Canada’s criminal laws with respect to deepfakes (see R v R.K.1, 2025 ONCJ 542). The federal government previously proposed the Online Harms Act under Bill C-63, which would have addressed deepfakes, but the bill did not proceed. The federal AI Minister has signaled that updated legislation targeting deepfakes is coming, and organizations should monitor these developments to avoid non-compliance.
11. AI in the workplace creates employment and human rights exposure
If your organization uses AI in hiring, performance management or employee monitoring, you are operating in an area of rapidly increasing legal scrutiny. AI-powered recruitment tools – resumé screeners, applicant tracking systems, chatbot pre-screening and predictive models that rank candidates – are now common. Many of these tools operate behind the scenes. In many cases, employers do not control how the systems are trained or what data they rely on.
The legal risk is straightforward: Canadian human rights legislation does not care whether discrimination was intentional, unintentional or produced by a machine. If a hiring process screens out candidates in a way that disproportionately impacts protected groups – whether on the basis of race, gender, age, disability or any other protected ground – that can ground a human rights complaint regardless of the employer’s intent.
Employment regulators’ first steps in this area appear to be focused on transparency. For example, amendments to Ontario’s Employment Standards Act, 2000 that came into force on January 1, 2026, require Ontario employers with 25 or more employees to disclose the use of artificial intelligence when screening, assessing, or selecting job applicants.
AI-powered employee monitoring is a parallel area of concern. A 2023 joint resolution of Canada’s federal, provincial and territorial Privacy Commissioners called on governments to close legislative gaps in employee privacy protection and called on employers to respect the principles of reasonableness, necessity and proportionality when deploying electronic surveillance and AI monitoring technologies in the workplace. The Commissioners specifically called on employers not to use AI technologies to make significant decisions about an employee’s performance, candidacy or employment prospects without a “human-in-the-loop.” They also noted that intense monitoring disproportionately affects workers who are low income, younger, have disabilities or are racialized.
For Western Canadian organizations, the practical steps are to audit any AI tools used in hiring or employee management for discriminatory outputs, ensure human oversight of AI-driven employment decisions, provide clear transparency to employees and applicants about AI use and conduct privacy impact assessments before deploying monitoring technologies.
12. Competition and consumer protection laws apply to your AI-driven pricing and marketing
If your organization uses AI-driven pricing tools, recommendation engines or automated marketing systems, you need to understand that competition law applies to those tools just as it applies to manual pricing decisions.
The Competition Bureau published its report on AI and competition in January 2025, summarizing 28 submissions received during its 2024 public consultation. Algorithmic pricing – the use of AI-powered systems to set and adjust prices – was flagged as a potential mechanism for tacit collusion, in which AI systems autonomously align on prices without explicit human instruction, communication or agreement. The concern is particularly acute in the “hub-and-spoke” scenario, where multiple competitors use the same third-party pricing algorithm, potentially creating indirect coordination of pricing strategies.
In January 2026, the Bureau published a further report on algorithmic pricing following a dedicated consultation that ran from June to August 2025, receiving over 100 submissions. That report identified four key themes:
- Algorithmic pricing can create market efficiencies
- It can also lead to anti-competitive behaviour
- A lack of data transparency could harm consumers, workers and competition
- Regulations should address anti-competitive conduct without stifling innovation
AI-powered deceptive marketing was another major concern. The Bureau has emphasized that the Competition Act’s prohibition on false or misleading representations applies fully to AI-generated content. The Bureau has also identified artificial intelligence as a sector-specific enforcement priority in its 2025–26 annual plan, and recently investigated RealPage, a U.S.-based software company whose algorithm is used by landlords to set rental prices. The Bureau ultimately discontinued its investigation after determining that use of RealPage was not sufficiently widespread in Canada to substantially harm competition. However, the Bureau flagged concerns with the broader practice of competing businesses using shared algorithmic tools to set prices.
We are also now starting to see a shift toward rules that limit personalized algorithmic pricing. In March 2026, the Manitoba government introduced an amendment to the Manitoba Business Practices Act (MBPA) to make it an unfair business practice for bricks-and-mortar retailers, online retailers and online marketplaces to use personalized algorithmic pricing, including through the use of AI tools, to increase what they charge a specific consumer. The bill proposes to add transparency obligations requiring that algorithmic pricing be disclosed as a material fact in consumer transactions.
For Western Canadian businesses, the practical takeaway is that AI-driven tools must be deployed with competition and consumer protection laws in mind. Avoid sharing competitively sensitive information through common algorithm providers, ensure that AI-driven pricing systems include human oversight and review all AI-generated marketing content for accuracy and compliance with the Competition Act.
The 12-Point Health Check
The following questions map to each of the twelve points above and are designed to help your leadership team assess where your organization stands today. They are intended as a conversation starter, not a pass-or-fail test. If your organization can answer “yes” to most of these questions, you are in a strong position. If several of them reveal gaps, those gaps represent areas of current risk that are worth addressing sooner rather than later:
1. Inventory and visibility
Does your organization maintain a current list of all AI tools in use across departments, including tools adopted by individual employees? For each AI tool, can you identify what data it processes, where that data is stored and whether the vendor uses your data to train its models? Have you identified and addressed any “shadow AI” (tools being used without formal organizational approval or IT oversight)?
2. Policy and boundaries
Does your organization have a written AI acceptable use policy? Does that policy specify which categories of information may not be entered into AI tools? Does it identify which AI tools are approved for organizational use and which are not? Is the policy communicated to all employees, including new hires?
3. Privacy and data protection
Have you assessed whether your existing privacy notices and consent mechanisms are adequate to cover AI-related uses of personal information? Have you conducted or updated privacy impact assessments for AI systems that process personal information? Do your data retention and de-identification practices account for how AI systems store, reuse and generate information?
4. Accountability and oversight
Is there a designated person or team within your organization responsible for AI governance? Does that person or team have visibility into AI adoption across all departments? Is AI governance a standing item on your leadership team’s agenda or does it arise only on an ad hoc basis?
5. Board and staff training
Has your board of directors received a briefing on AI-related risks, including privacy exposure, algorithmic bias and liability for AI-generated outputs? Can your board members ask informed questions about AI risk and exercise meaningful oversight of management’s AI-related decisions? Have staff who use AI tools in their day-to-day roles received practical, role-specific training on the risks and limitations of those tools? Is AI training recurring or was it a one-time event? Have you assessed whether your training program keeps pace with changes in the AI tools your organization uses?
6. Regulatory preparedness
Is someone in your organization monitoring developments in Canadian AI regulation at both the federal and provincial levels? Are your governance structures designed to be adaptable as new legislative requirements emerge? Have you considered how the federal government’s approach to risk-based classification of AI systems might apply to the AI tools your organization uses?
7. Liability and customer-facing AI
Do you use chatbots, virtual assistants or other automated systems that communicate with customers? Are those tools regularly audited for accuracy? Have you reviewed your terms of service and disclaimers? Is there a process for escalating customer complaints that arise from AI-generated responses? Does your organization have a plan for responding to an AI-related incident – such as a data breach caused by an AI tool, an AI system producing a discriminatory outcome, or an AI chatbot providing incorrect information to a customer? Do your existing incident response and breach notification procedures account for AI-specific failure modes?
8. Intellectual property
Do you have clear policies governing the use of generative AI in creating content, marketing materials or work product? Do those policies address the risk that AI-generated outputs may not be eligible for copyright protection or may incorporate third-party copyrighted material? Have you considered whether content created with generative AI is sufficiently protected for commercial use?
9. AI procurement
Do your contracts with AI vendors address IP ownership of outputs, data handling and residency, model training restrictions and cross-border data transfers? Do your procurement processes include evaluation criteria for AI-specific risks, including accuracy, bias and security? Do your vendor agreements include audit rights, indemnification for AI errors and hallucinations and requirements for vendor cooperation with regulatory inquiries?
10. Deepfakes and synthetic media
Does your organization have protocols for authenticating public communications and verifying the provenance of marketing materials? Have your employees received training on recognizing AI-generated phishing, impersonation and social engineering? Do you have a response plan for deepfake-related incidents involving your brand or executives?
11. AI in hiring and employee monitoring
If you use AI tools to screen, rank or assess job applicants, have those tools been audited for discriminatory outputs across protected grounds? Is there meaningful human review of AI-driven hiring decisions before they become final? Are applicants informed that AI is being used in the hiring process? If you use AI-powered employee monitoring tools – including productivity tracking, keystroke logging or location monitoring – have you assessed whether the monitoring is reasonable, necessary and proportionate? Have you provided employees with clear notice of what is being monitored and why? Have you conducted a privacy impact assessment for your employee monitoring practices?
12. Competition, consumer protection and marketing
If you use AI-driven pricing tools, have you assessed the competition and consumer protection law implications, including the risk of algorithmic coordination? Have you considered whether your use of third-party pricing algorithms could create a “hub-and-spoke” dynamic with competitors? Do you review AI-generated marketing content for compliance with the Competition Act’s prohibitions on false or misleading representations? Do AI-powered personalized algorithmic pricing strategies raise unfair business practice concerns under consumer protection laws?
If your answers to these questions reveal significant gaps across the twelve points, the time to begin addressing them is now. No organization will score perfectly on all twelve – the landscape is moving too fast for that. But the organizations that will be best positioned are the ones that have worked through these questions, identified their gaps honestly and started closing them. The legal obligations that apply to AI use in Western Canada are not hypothetical or future-looking – they exist today under privacy, human rights, competition, and common law frameworks that are already being enforced. Building a governance program now, even an imperfect one, is materially better than waiting for a crisis or a regulatory inquiry to force the issue.
Note: This article is of a general nature only and is not exhaustive of all possible legal rights or remedies. In addition, laws may change over time and should be interpreted only in the context of particular circumstances such that these materials are not intended to be relied upon or taken as legal advice or opinion. Readers should consult a legal professional for specific advice in any particular situation.