Authors
Partner, Technology, Toronto
Partner, Employment and Labour, Toronto
Articling Student, Toronto
On March 21, 2024, the Working for Workers Four Act, 2024 (Bill 149) received Royal Assent. Bill 149 amended the Employment Standards Act, 2000 (ESA) to include new job posting requirements for employers. A previous Osler Update, “‘Working for workers’ means more work for employers,” outlined these requirements, including the obligation employers will have to disclose their use of artificial intelligence (AI) to screen, select and assess applicants in the hiring process for publicly advertised job postings.
The definition of “artificial intelligence” initially proposed under Bill 149 was as follows:
“A machine-based system that, for explicit or implicit objectives, infers from the input it receives in order to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.”
New regulation under the ESA
After the Ontario Ministry of Labour, Immigration, Training and Skills Development (the Ministry) requested public input on the new job posting requirements, O. Reg. 476/24 (the Regulation) was issued. The Regulation confirms the definition of “artificial intelligence” as initially proposed.
The Regulation also confirms the definition of a “publicly advertised job posting” as “an external job posting that an employer or a person acting on behalf of an employer advertises to the general public in any manner…” However, several types of job postings, including general recruitment efforts and internal hiring initiatives, are excluded, and employers are carved out from the requirement if they have fewer than 25 employees on the day a publicly advertised job posting is posted.
All of the above are set to come into force on January 1, 2026.
Implications of the regulation
As highlighted in our previous Osler Update, referred to above, absent further clarification or guidance from the Ministry, the definition of “artificial intelligence” under Bill 149 is complex and leaves room for inconsistent interpretation. Specifically, employers may face challenges even identifying what constitutes “artificial intelligence,” which could range from simple keyword-matching algorithms to machine learning models that assess the probability of a candidate’s success. Such a broad and complex definition could lead either to over-disclosure, potentially deterring candidates or inviting unnecessary inquiries, or to inadvertent failure to disclose AI usage, creating non-compliance and reputational risks.
Employers may also struggle to interpret key terms such as “screen,” “assess” or “select.” For example, query whether “select” includes scenarios where a human ultimately makes the hiring decision if the decision is supported by typical suites of programs or systems, many of which are becoming increasingly infused with AI. Without further guidance, it may be difficult for employers to determine when disclosure is actually required.
In an effort to avoid this ambiguity, some employers may conclude that simply forgoing any use of AI to “screen, assess or select applicants” for a role is a viable strategy. However, in our experience advising clients on the integration of AI into their HR-related workflows, this may not be realistic for most businesses, as AI is increasingly relied upon to improve efficiency in HR processes. We instead expect that employers will settle on standard AI-related disclosure language and include it in all publicly advertised job postings, regardless of whether the technology’s use is captured by the definition in each case. While this approach may mitigate practical compliance challenges, it may also dilute the intent of the Regulation.
The Ministry has said that the disclosure requirement was introduced to help workers make informed decisions in their career search. However, it is unclear how a disclosure statement alone furthers this goal, especially if the statements themselves become “boilerplate” in nature. For example, an employer who uses AI for basic keyword screening could, in practice, include the same disclosure statement in a job posting as an employer who uses the technology to predict a candidate’s potential success in a role. This prevents workers from distinguishing employers based on the degree of influence AI has in their hiring process, potentially undermining the transparency this Regulation aims to achieve.
In addition, questions remain regarding the sufficiency of the disclosure. For example, what level of detail is required for compliance? Does a bare statement that AI was used suffice, or could the new rules require employers to outline the specific functions AI performs, along with its limitations and risks, so that applicants can appreciate the ramifications? If it is the latter, employers will need to consider how to communicate these limitations and risks in a manner that can be readily understood.
To prepare for compliance, training for HR is essential. HR teams may need to be trained to:
- identify when AI is actually used to screen, select and assess applicants
- craft transparent, accurate disclosure language that meets regulatory requirements, and respond to inquiries about such disclosure
- ensure they have a voice in AI governance frameworks and procedures supporting the responsible use of AI
While the disclosure requirement, without further guidance or interpretive case law, may not initially impose significant new obligations on employers posting for positions in Ontario, we can expect further regulation in this area as policymakers grow increasingly focused on incentivizing the ethical use of AI in the workplace. Employers should stay abreast of these developments and start taking proactive steps to understand how these requirements may apply to their hiring processes.