Organizations implementing artificial intelligence (“AI”)[1] products and services that use personal information are currently working in a vacuum, with no definitive standards or frameworks to guide them.[2]
However, the Information and Privacy Commissioner of Ontario (“IPC”) recently considered a university’s use of AI in processing sensitive personal information, and in so doing provided recommendations for ensuring the privacy-protective adoption of such technologies. The IPC’s Privacy Complaint Report PI21-00001 (“Report”) is focused on public sector institutions’ obligations under Ontario’s public sector privacy law – the Freedom of Information and Protection of Privacy Act (“FIPPA”). However, the Commissioner’s advice and recommendations may be a valuable resource for organizations seeking to implement AI systems in a privacy-protective manner more broadly, regardless of jurisdiction.
Privacy Complaint Report PI21-00001
The Report pertains to a Commissioner-initiated complaint under FIPPA addressing McMaster University’s (“University”) use of an AI-powered exam-proctoring software called Respondus for remote/virtual examinations. Respondus consists of two programs, which are often used together: one locks down various functions on students’ computers for the duration of a test to discourage cheating, and the other accesses students’ webcams to record audio and video while a test is taking place. The recorded data is then analyzed using AI to flag activities that Respondus deems suspicious, and the results are used to inform the University’s academic integrity processes.
In the Report, the IPC noted with approval the steps the University took prior to implementing the Respondus software during the height of the COVID-19 pandemic, including:
- conducting a privacy impact assessment and two risk assessments;
- running a pilot project to evaluate online proctoring options;
- reviewing the vendor’s policies, protocols, and terms of use; and
- developing its own policies and related documents regarding the University’s adoption of the program.
The IPC also concluded that the University’s collection of personal information via Respondus, including sensitive biometric information, was necessary for a lawful purpose – namely, conducting and proctoring exams – and was therefore authorized under FIPPA, and that the use of the personal information for that purpose was similarly lawful.
However, the IPC found that the University ultimately infringed upon students’ privacy because of the following:
- The University’s disparate sources of information regarding Respondus did not satisfy the spirit and intent of the notice requirements under FIPPA;
- The AI vendor’s use of sensitive personal information for its own commercial interests (i.e. improvement purposes) was unlawful, as it went beyond the University’s lawfully authorized purpose of conducting and proctoring exams, was not a consistent purpose, and would not reasonably have been expected by the students to whom the personal information relates; and
- The contractual and oversight arrangements between the AI vendor and the University did not adequately address the University’s obligation to protect students’ personal information under FIPPA.
The IPC’s Recommendations regarding Privacy-Protective AI Use
The Commissioner made the following recommendations to bring the University’s use of Respondus into alignment with its obligations under FIPPA:[3]
1. Adequate notice of collection, use and disclosure of personal information
- The University should consolidate its notice of personal information processing via Respondus in a clear and comprehensive statement, either in a single document or with clear cross-references to other related sources of information.
2. Contractual protections or other written undertakings[4]
The Commissioner recommended that the University enter into an agreement with the AI vendor/service provider that specifically addresses the following:
Ownership of data
- Ensure that the University maintains ownership over the individuals’ personal information held by the AI vendor
Confidential information
- Require the AI vendor to treat all personal information as confidential information (i.e. not permitting the AI vendor to “de-identify” the data and then use it for service improvement purposes)
Collection, use and disclosure
- Require the AI vendor to collect no more personal information than is necessary for the identified purposes or as required by law
- Limit the AI vendor’s use of personal information to the identified purpose (here, for proctoring exams only, and prohibiting the use of individuals’ personal information for research, training, and system improvement purposes, absent the individuals’ consent)
- Prohibit the disclosure of personal information for purposes of improving the AI vendor’s systems, or for similar purposes, absent consent
Notice of compelled disclosure
- Require the AI vendor to provide notice to the University of any compelled disclosure of personal information, such as to government or law enforcement authorities, and to allow the University an opportunity to seek appropriate remedies to prevent or limit such disclosure
- Require the AI vendor to disclose only the personal information that is legally compelled to be disclosed, and no more
Subcontracting
- If the data processing agreement permits subcontracting by the AI vendor, limit the purposes for which subcontractors can process the personal information (i.e. prohibit the use of individuals’ personal information for system improvement and research purposes)
Security
- Establish a requirement that the AI vendor implement appropriate technical, organizational and physical safeguards to protect the personal information from unauthorized processing, having regard to the sensitivity of the information
- Require the AI vendor to notify the University in the event of a breach
Audits
- Provide the University with audit rights and impose record-keeping requirements on the AI vendor
Retention and destruction
- Require the AI vendor to delete personal information once it is no longer required and in accordance with established retention periods, and provide the University with proof of deletion
3. Additional risk mitigation strategies[5]
The above recommendations arose from the University’s obligations under FIPPA.
The Commissioner acknowledged that there is currently no binding law or policy governing the use of AI in Ontario’s public sector. In the absence of such legislation or framework, she recommended the following additional guardrails for protecting personal information being processed by AI systems:
Due diligence
- Engage in a thorough due diligence process to assess the AI vendor’s privacy-protective mechanisms and determine whether less privacy-invasive options are available that would achieve the same desired outcomes
- Conduct privacy impact assessments and algorithmic impact assessments to assess the level of risk associated with the University’s use of the AI system, and to identify methods of mitigating those risks and potential impacts
Consultation with experts and affected communities
- Consult with affected parties and those with relevant expertise prior to AI adoption and regularly thereafter
Broader opportunity for individuals to opt out
- Provide opt-out arrangements for those with apprehensions about AI-enabled software and the significant impacts it may have on their personal information (in addition to the accommodations that are available for individuals based on protected human rights grounds)
Human supervision and the ability to challenge the results
- Ensure that ultimate responsibility for the accuracy of data and the validity of decisions based on AI-generated information rests with a human – here, the course instructor
- Provide individuals with a mechanism for challenging flags raised by the AI system without having to invoke the University’s academic integrity process, and the right to have the information corrected and the flag removed, if warranted
Use of vendors
- Ensure that the AI vendor has designed the AI system in a manner that ensures safe and accurate results, accounts for privacy and security concerns, and mitigates potential bias and discriminatory impacts
- Ensure the data that an AI vendor used to train its AI system was obtained in compliance with Canadian laws
- Establish reciprocal obligations requiring each party to notify the other of potential inappropriate uses, biased outcomes, or other vulnerabilities in the AI system
Takeaways
Privacy Complaint Report PI21-00001 is noteworthy because the Commissioner’s findings and recommendations are a useful resource regarding the due diligence processes and governance structures that should be in place when organizations use AI tools to process personal information. While the Report is grounded in the IPC’s jurisdiction under FIPPA, the recommendations flow from privacy-protective principles more generally, and will be useful for organizations regardless of whether they are covered by private or public sector privacy legislation.
This Report clearly demonstrates that while organizations can outsource data processing functions, including through the use of AI systems, they cannot outsource their responsibilities under privacy protection legislation. Organizations that currently use AI systems to process personal information should consider whether they have adequate privacy-protective mechanisms in place, in light of the recommendations discussed above.
For more information on privacy policies and breaches, please reach out to Jaime Cardy or any member of Dentons’ Privacy and Cybersecurity group.
[1] While there is no uniform definition of “artificial intelligence”, the definition in the now-enacted European Union AI Act is “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments” (Article 3).
[2] Artificial intelligence technologies have seen rapid growth over the past few years, and yet, there are no laws governing the use of AI in the public or private sectors within Canada. While organizations appreciate that their adoption of new technologies should be ethical and defensible, there are few signposts directing them how to do so, and even less clarity regarding the consequences that may result from failing to meet those elusive standards. Legislators are keen to address AI, as evidenced by Bill C-27 at the federal level, which proposes to enact the Artificial Intelligence and Data Act to regulate the use of AI systems in the private sector, and Bill 194 at the provincial level, which proposes to regulate AI in certain “vulnerable” public sectors in Ontario under the Enhancing Digital Security and Trust Act, 2024. The governments of Ontario and Canada are also working on frameworks to inform the use of AI technologies in their respective public sectors, and the Innovation Council of Quebec has recommended the adoption by the provincial legislature of a law to regulate AI. However, the legislative process takes months (if not years), and technology adoption occurs at a comparatively swift pace. Organizations can find some guidance from Canadian privacy regulators and within personal privacy legislation – at least to the extent that their AI solutions involve processing personal information. For example, in December 2023, Canada’s federal, provincial, and territorial privacy commissioners teamed up to launch the Principles for responsible, trustworthy and privacy-protective generative AI technologies, which is a framework intended to assist organizations developing, providing, or using generative AI. While helpful, the principles are limited in scope to generative AI technologies.
[3] These recommendations are in addition to the threshold requirement of grounding the proposed collections, uses and disclosures of personal information in consent or another lawful purpose.
[4] Some of these requirements were adequately addressed in the agreements entered into between the University and Respondus; however, they have all been included in this list for ease of reference.
[5] The Commissioner found that some of these recommendations were already satisfied by the University; however, they have all been included in this list for ease of reference.