Authors
Partner, Disputes, Toronto
Associate, Disputes, Toronto
Associate, Disputes, Toronto
Introduction
On December 5, 2024, the Canadian Securities Administrators (CSA) published a notice intended to provide clarity and guidance on how securities legislation applies to the use of artificial intelligence (AI). Staff Notice and Consultation 11-348 – Applicability of Canadian Securities Laws and the use of Artificial Intelligence Systems in Capital Markets (the Staff Notice) sets out the following critical points on the use of AI in capital markets:
- Key considerations: The Staff Notice outlines essential considerations for various entities such as registrants, reporting issuers, and other market participants that may utilize AI systems. The Staff Notice defines “AI system” as “a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
- Transparency, accountability, and risk mitigation: The Staff Notice underscores the significance of maintaining transparency and accountability, as well as the necessity to mitigate risks arising from the use of AI systems.
- Stakeholder engagement: Finally, the Staff Notice invites feedback from stakeholders through a series of consultation questions. The request for feedback is designed to gather insights on the evolving role of AI systems and to explore the potential need for adjustments or enhancements to the current regulatory and oversight frameworks in response to technological advancements.
Importantly, the Staff Notice does not establish new securities laws or amend existing ones. Instead, it provides guidance on the application of current securities laws in scenarios where AI is employed. In this blog post, we canvass the most salient guidance for market participants contained in the Staff Notice, including guidance on:
- general requirements for deploying AI systems
- AI governance and oversight
- disclosure of AI systems
- the risk of AI washing
General requirements for deploying AI systems
The Staff Notice stresses the importance of deploying AI systems with a high degree of explainability, which promotes transparency and assists market participants in satisfying their obligations under securities law. Market participants that deploy AI systems for decision-making should ensure the reasoning behind an output is clear and comprehensible, so that it can be shown why and how a decision was made. AI systems with a lower degree of explainability, referred to as “black boxes”, may challenge concepts of transparency, accountability, record keeping, and auditability.
Furthermore, the Staff Notice underscores the necessity for regular testing of AI systems, which should be proportional to the significance of the AI system’s role within the organization. This testing should be carried out by individuals who have the appropriate level of expertise. This approach ensures that AI systems are reliable and compliant with the regulatory expectations set forth in securities laws.
AI governance and oversight
The Staff Notice states that governance and risk management practices should be paramount for market participants whose business activities are regulated under securities law when deploying AI systems in capital markets. In this regard, market participants are expected to develop policies and procedures tailored to address the use of AI systems and any associated risks.
The Staff Notice sets out a non-exhaustive list of considerations to address in supervisory controls, policies, and procedures, including the following:
- considering AI system planning, design, verification, and validation, and ensuring that robust testing prior to deployment has taken place
- having a “human-in-the-loop”, where humans can effectively monitor inputs and outputs of an AI system
- ensuring adequate AI literacy of those using the outputs of AI systems to ensure that the users of a given output are using it for its intended purpose
- implementing adequate measures to mitigate the technological and operational risks related to the use of AI systems (e.g. initial and ongoing monitoring of cybersecurity risks, AI system bias, model drift, and hallucinations)
- safeguarding data integrity in AI systems, including ensuring data accuracy and completeness, preventing the inclusion of prohibited information, and accounting for privacy considerations
- examining the full supply chain of an AI system throughout its lifecycle, including third-party service providers, cloud services, and data sources
Additionally, the Staff Notice reminds market participants that they are responsible and accountable for all outsourced functions. Registrants, for example, are expected to create tailored policies and procedures to address the unique risks posed by AI systems developed and operated by third parties.
Disclosure of AI systems
The Staff Notice advises market participants to be mindful of their disclosure obligations when it comes to AI systems, and specifically notes the following:
- Investment fund managers are expected to disclose a fund’s use of AI systems where such use is marketed as a material investment strategy.
- Non-investment fund reporting issuers should consider disclosure of AI system use in business operations, as well as the development of products or services that rely on AI systems, where material.
- Disclosure must include the unique risk factors associated with AI systems so that investors and securities regulators can better understand the risks arising from their use. Investment fund managers, for example, should be mindful of the risk of model drift associated with the use of AI systems, where a fund manager begins investing outside of a fund’s stated investment objective. Non-investment fund issuers should consider and, where appropriate, disclose AI-related risk factors such as operational, third-party, ethical, regulatory, competitive, and cybersecurity risks.
- Issuers should consider whether statements about the prospective use of AI systems in their continuous disclosure constitute forward-looking information, in which case there must be a reasonable basis for that forward-looking information.
The risk of AI washing
The guidance in the Staff Notice balances the importance of comprehensive disclosure of a market participant’s use of AI systems against the need to avoid inaccurate, false, misleading, or embellished claims about that use, a practice commonly referred to as “AI washing”.
To avoid AI washing, the Staff Notice warns against making vague, unsubstantiated statements that use jargon to attract investors. Rather, it expects disclosures about AI systems to be fair, balanced, and not misleading. The Staff Notice specifically cautions investment fund managers to exercise care in AI-related disclosure in sales communications, to prevent untrue or misleading statements.
Next steps
As noted, the guidance provided in the Staff Notice is based on existing securities legislation and does not create new legal requirements or modify existing ones. However, alongside the guidance, the Staff Notice has provided several consultation questions to determine whether regulatory adjustments are required to accommodate AI systems in capital markets. Based on the consultation questions, it is clear that Staff’s consideration of the implications of AI is still at an early stage, and they are seeking input on whether and to what extent existing laws should be modified to reflect the use of AI in a wide range of circumstances. Comments will be accepted in writing until March 31, 2025. The consultation questions and submission instructions can be found under Part III of the Staff Notice.
Takeaways
The Staff Notice emphasizes the regulatory attention being paid to the use of AI in capital markets. It highlights key areas of concern for regulators, which include the impact of AI on financial disclosures, the management of conflicts of interest, and the delegation of tasks that require registration to AI systems. The notice also sets out the expectation that firms will establish robust governance and oversight mechanisms when implementing AI technologies. Furthermore, the Staff Notice conveys a preference for AI systems that offer a high level of explainability, suggesting that regulators are more likely to favour systems whose decisions can be easily understood and traced.
It is clear from the scope of the questions on which the CSA is seeking input that this guidance marks only the beginning of Staff’s process of coming to grips with this issue. The CSA’s guidance is likely to evolve following the consultation period, and as the implications of the use of AI manifest themselves in the market. Nevertheless, the Staff Notice’s focus on AI systems is a welcome step toward ensuring that fintech regulation keeps pace with innovation in the industry.