2024 OSLER LEGAL OUTLOOK

Unlocking AI innovation

Dec 5, 2024    9 MIN READ

Artificial intelligence (AI) is more than a technological trend; it is a transformative force reshaping industries from customer service to data analysis and content creation. As a catalyst for and driver of innovation, AI has empowered businesses to automate both mundane and complex tasks, uncover new insights, and enhance decision-making.

Alongside these opportunities, concerns about the possible risks associated with AI applications have also intensified. Among the key risks are potential biases in algorithms, leading to unfair treatment of affected individuals, as well as threats to data privacy arising from the sheer volume of information collected for machine learning. In response to these risks, enhanced requirements are being proposed and are gradually coming into force. To safely unlock AI’s full potential, organizations must take measures to de-risk the technology, both to comply with legal requirements and to protect themselves and their users. This involves navigating a complex and evolving landscape of standards, guidelines, legislation, and contractual frameworks. Proactive de-risking provides a strong foundation for sustained and responsible innovation.

To date, one in seven Canadian businesses [PDF] has adopted AI across its operations. Adoption rates are expected to increase dramatically in 2025 and beyond, with an expectation of a 50% adoption rate across businesses within the next three to six years [PDF]. As AI technologies continue to evolve, AI could grow Canada’s productivity by 1% to 6% over the next decade [PDF].

While the return on investment from AI innovation is promising, businesses must pay close attention to AI’s associated risks in order to safely and effectively deploy the technology and take advantage of the opportunities it offers.

A roadmap for the future: AI standards and guidelines

Standards and guidelines provide a roadmap that enables businesses to innovate within defined, trusted boundaries. By adopting industry-recognized standards and guidelines, organizations can structure their development and deployment of AI based on accepted principles of safety, fairness, and accountability. This foundation mitigates risk and can give organizations the confidence to pursue new AI applications, while fostering stakeholder trust. This increased confidence can, in turn, accelerate the path from concept to deployment.

One clear challenge in the AI sector is the variety of standards and frameworks that have been published in recent years. While these documents establish foundational standards for the responsible use of AI, they are not all the same. Determining which standards best suit the context and enhance confidence in the particular AI application requires careful consideration.

One prominent publication is the AI Risk Management Framework [PDF] developed by the National Institute of Standards and Technology (NIST). The framework sets out voluntary principles to improve trustworthiness in the design, development, use and evaluation of AI products, systems and services. One advantage of adopting the NIST framework is that it offers actionable tools for implementation, including the Generative AI Profile [PDF], the NIST AI RMF Playbook, the AI RMF explainer video and the AI RMF Roadmap, which provide hands-on guidance for organizations developing their approaches to AI risk management.

Another key instrument is the ISO/IEC 42001 standard developed by the International Organization for Standardization (ISO). This standard offers a comprehensive management system addressing ethics, transparency, accountability, bias mitigation, safety, and privacy in AI. It was created by a team of global and regional experts as part of a broader set of over 33 AI-related standards published by the ISO to help organizations manage AI-related risks and opportunities. Because ISO/IEC 27001 is already familiar to many Canadian organizations that have adopted it for information security and cybersecurity, those organizations may find the ISO/IEC 42001 standard well suited for establishing consistency in their approach to de-risking AI.

Adding to these foundational documents are the Organisation for Economic Co-operation and Development (OECD) AI Principles. These principles are the first intergovernmental standard addressing AI. They include a set of value-based principles for developing trustworthy AI and recommendations for policymakers when creating AI policies. An advantage of the OECD principles lies in their emphasis on global interoperability, making them a useful reference for organizations aiming for international compliance. There are 47 adherents to the OECD AI Principles, consisting of all OECD member countries (including Canada and the United States) and several non-member countries (including Brazil, Argentina and Colombia). Adherents use the principles to shape policies and create AI risk frameworks, fostering alignment across jurisdictions.

One pragmatic approach to assessing which framework to use involves carefully considering the organization’s specific operational contexts, regulatory environment, and strategic goals. This assessment should include an evaluation of the standards most applicable to the industry, the regions in which the organization operates, the specific AI technologies developed or deployed, and the standards the organization currently complies with for other purposes.

Many organizations find standards and guidelines invaluable for shaping internal guidance frameworks. Additionally, in the absence of formal regulation, standards often act as regulatory proxies until legislation comes into force. Given the speed at which AI adoption is occurring, further development in AI standards and governance will surely be forthcoming.

Evolving regulatory environment for AI

In 2024, we witnessed an increase in regulations aimed at addressing the risks associated with AI, focusing on the likelihood and severity of potential harm these technologies may pose. Several jurisdictions are currently looking to expand, develop or implement AI regulation in ways that will have an impact in 2025 and beyond.

While regulation can sometimes be perceived as a barrier to innovation, thoughtfully crafted AI regulations can instead unlock innovation by creating a structured, risk-based approach that balances safety with the freedom to innovate.

For businesses seeking to innovate, studying these legislative developments can build a clearer understanding of the evolving regulatory landscape and position them to innovate with confidence.

These emerging regulations often intersect with existing frameworks, guidelines, and standards. Many jurisdictions, including the European Union and the United States, are incorporating principles from established frameworks, including the OECD AI Principles, to guide their approach in defining AI and determining how AI will be regulated. As more jurisdictions look to implement regulation, the sharing and intersection of these approaches will undoubtedly become more evident.

In Canada, the Artificial Intelligence and Data Act (AIDA) [PDF], with proposed amendments [PDF] by Innovation, Science and Economic Development Canada in November 2023, is a proposed regulatory framework that applies to general-purpose systems, high-impact systems and models that form part of such systems. High-impact systems include AI systems that determine matters related to employment or to the prioritization of the services to be provided to individuals. Depending on the classification of the AI system and whether an organization is developing or deploying such systems, AIDA would introduce compliance requirements that include assessing and mitigating the risks of harm posed by AI. AIDA is part of a more comprehensive overhaul of federal privacy, consumer and data protection legislation currently under clause-by-clause review, which is discussed in more detail in our Osler Legal Outlook privacy article. While AIDA may not be enacted before a federal election is called in 2025, the current version of the legislation may inform future AI regulation in Canada.

The EU AI Act, which came into force in the European Union in August 2024, classifies AI systems into the following risk categories: unacceptable, high, limited and minimal. Risks are determined based on their potential impact on health, safety, fundamental rights, or the environment. This risk-based approach imposes stricter obligations on high-risk AI systems, including data governance practices, as well as requirements that training, validation and testing datasets be sufficiently representative and as free of errors as possible. The risk-based approach to evaluating AI has been followed in the United States at the state level, including Colorado’s consumer protections [PDF] based on AI risk level.

As legislative measures continue to evolve, organizations should adopt general principles to comply with existing legislation and prepare for future regulations. These principles should generally reflect a risk-based approach: regularly assessing AI systems for potential risks, proactively ensuring transparency in AI use, and implementing measures to mitigate identified risks.

Understanding the regulatory landscape can help organizations confidently allocate resources to projects that are less likely to encounter compliance obstacles. By designing with compliance in mind and considering both existing and developing legislation in key global markets from the outset, businesses can develop AI applications with confidence, foster greater trust among those procuring AI services, and encourage broader AI adoption.

AI is also affecting contracts

At the same time, norms in contracting are evolving that define expectations and responsibilities around AI use, especially as companies collaborate on complex, data-intensive projects. Parties can foster innovation while managing risks by discussing and reaching clear terms on data rights, bias mitigation, transparency and accountability.

Sophisticated agreements that reflect the use of AI should contain provisions relating to data, including the data used to train models and for underlying model development. They should also delineate the use of the organization’s data to further AI deployment and development. Contracting parties may also wish to include terms relating to bias in AI models, including testing for bias as well as detection of and responses to biased outcomes; a simple illustration of what such a test might look like follows below.
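
To make the idea of bias testing more concrete, the following is a minimal Python sketch of one common style of check, a demographic parity comparison of favourable-outcome rates across groups. It is illustrative only: the metric, the tolerance value and the sample data are hypothetical assumptions, not a prescribed or contractually standard test.

# Illustrative sketch only: one possible form of bias testing that contract
# terms might contemplate. The metric, tolerance and data are hypothetical.

def demographic_parity_gap(outcomes, groups):
    """Return the largest gap in favourable-outcome rates between groups."""
    counts = {}
    for outcome, group in zip(outcomes, groups):
        favourable, total = counts.get(group, (0, 0))
        counts[group] = (favourable + outcome, total + 1)
    rates = [favourable / total for favourable, total in counts.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions (1 = favourable outcome) and group labels.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group_labels = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

TOLERANCE = 0.2  # an assumed, contractually agreed threshold
gap = demographic_parity_gap(decisions, group_labels)
if gap > TOLERANCE:
    print(f"Parity gap of {gap:.2f} exceeds tolerance; remediation terms would apply")
else:
    print(f"Parity gap of {gap:.2f} is within the agreed tolerance")

In practice, the agreed metric, tolerance and remediation steps would be negotiated and documented in the contract rather than fixed in code as above.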

Transparency is a critical aspect of AI usage, ensuring that parties have insight into how AI systems function, make decisions, and impact outcomes. Including specific transparency provisions can help address concerns around understanding, accountability, and reliability in AI-driven processes. Finally, contracting parties may wish to include terms that allocate responsibility for monitoring and “human in the loop” decision making, where appropriate.
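
As an illustration of what allocating “human in the loop” responsibility might involve operationally, the short Python sketch below routes low-confidence automated decisions to a human review queue. The confidence threshold, case identifiers and queue are hypothetical assumptions for illustration, not requirements drawn from any statute, standard or specific agreement.

# Illustrative sketch only: low-confidence automated decisions are escalated
# to a person. The threshold and sample cases are hypothetical assumptions.

from dataclasses import dataclass

@dataclass
class Decision:
    case_id: str
    prediction: str
    confidence: float

CONFIDENCE_THRESHOLD = 0.85  # an assumed value; parties might agree on their own

def route(decision, review_queue):
    """Apply confident predictions automatically; escalate the rest to a person."""
    if decision.confidence >= CONFIDENCE_THRESHOLD:
        return f"{decision.case_id}: applied automatically ({decision.prediction})"
    review_queue.append(decision)
    return f"{decision.case_id}: escalated for human review"

review_queue = []
for decision in [Decision("A-101", "approve", 0.95), Decision("A-102", "deny", 0.62)]:
    print(route(decision, review_queue))
print(f"{len(review_queue)} decision(s) awaiting human review")

Contract terms could then specify who staffs the review queue, how quickly escalated decisions must be resolved, and how review outcomes are recorded.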

As norms surrounding data use rights, ownership, bias, and transparency continue to evolve, organizations are frequently reconsidering these terms, particularly in standard-form agreements, which rarely address the specific needs of AI contracting adequately.

Looking beyond 2024, we anticipate commercial contracting will increasingly play a critical role in supporting sustainable AI innovation, including by drawing on standards and regulatory principles discussed earlier and applying them to specific use cases and organizational needs.

The future of AI oversight

As AI systems become more sophisticated and capable, managing the associated risks is essential. Standards, guidelines, regulations, and contracts are complementary tools that can de-risk the technology and unlock AI innovation. Standards provide foundational guidance, establishing best practices and benchmarks for quality and ethics. Regulations require that AI technologies be developed and deployed within safe and ethical boundaries. Contracts offer flexibility and tailoring for specific use cases, allowing organizations to define expectations and responsibilities in collaborative projects. By viewing these de-risking measures as frameworks for responsible innovation rather than obstacles, businesses can help generate confidence in and accelerate the development and deployment of AI technologies. Embracing proactive de-risking in this way, regardless of upcoming legislative timelines, will best position organizations to innovate with confidence and unlock the full potential of AI.