The Minister responsible for Canada’s Bill C-27, An Act to enact the Consumer Privacy Protection Act (CPPA), the Personal Information and Data Protection Tribunal Act and the Artificial Intelligence and Data Act (AIDA) and to make consequential and related amendments to other Acts, recently announced in Committee hearings that he would be proposing significant amendments to Bill C-27 that would make privacy a fundamental right, enhance the privacy rights of children, and give the regulator the power to include financial terms in compliance agreements.
Proposed amendments to AIDA would classify as “high impact” (and therefore subject to regulation) AI systems that: make employment determinations (including recruitment and remuneration); determine whether an individual receives services, the cost of such services or the priority of those services; moderate or promote online content; or are used in certain health technologies, law enforcement, or judicial processes.
Context
C-27 was referred to the Standing Committee on Industry and Technology for consideration before Parliament rose for the summer. The Committee commenced its study two weeks ago, with an initial appearance by Innovation Minister François-Philippe Champagne. In his opening remarks, the Minister highlighted a number of areas in which he intended to make significant amendments to the CPPA and AIDA following consultation over the summer – including feedback from the Federal Privacy Commissioner. When pressed, the Minister said the amendments would be tabled during clause-by-clause review. This would be after the Committee has met with witnesses and experts with a view to improving the Bill and – as some Committee members pointed out – would therefore be of limited use.
Instead, the Committee demanded a copy of the proposed amendments to ensure a meaningful review of C-27. Last week, Minister Champagne published a letter to the Committee providing “considerable detail” on the amendments the government would be looking to propose; however, he stopped short of providing the actual amendments, which, according to the Minister, are currently being drafted.
Key Takeaways:
The government intends to propose the following amendments to the CPPA:
• Establish privacy as a fundamental right;
• Enhance protections related to children’s privacy; and
• Allow the Federal Privacy Commissioner to include financial considerations in compliance agreements.
The government intends to propose the following amendments to AIDA:
• Defining classes of systems that would be considered high impact;
• Specifying distinct obligations for generative general-purpose AI systems, like ChatGPT;
• Clearly differentiating roles and obligations for actors in the AI value chain;
• Strengthening and clarifying the role of the proposed AI and Data Commissioner; and
• Aligning with the EU AI Act as well as other advanced economies.
Proposed CPPA amendments
Below is a summary of the most salient points of the Minister’s proposals to the privacy portion of Bill C-27.
Privacy to be a fundamental right: Amend the preamble to the CPPA to explicitly recognize a fundamental right to privacy for Canadians.
Children’s rights to be enhanced: Recognize and reinforce the protection afforded to children by amending the preamble of the CPPA to include a specific reference to the special interests of children with respect to their personal information, and by amending section 12 so that organizations must consider the special interests of minors when determining whether personal information is being collected, used or disclosed for an appropriate purpose.
Privacy Commissioner to have power to directly levy financial consequences: Amend the CPPA to permit the terms of a compliance agreement to include financial consideration – addressing concerns that the Privacy Commissioner cannot levy a financial penalty on non-compliant organizations.
The proposed acknowledgement of privacy as a fundamental human right would move the CPPA closer to the European GDPR and strengthen the rights of individuals in respect of their privacy. Organizations would likely have a harder time justifying certain practices and, in dispute resolution, find it more challenging to defend against allegations of breaching privacy rights.
The inclusion of children’s privacy rights on this list of proposed amendments is not a surprise and reflects global developments. It will mean that the collection and use of children’s information will become more difficult and carry increased risk.
Giving the Office of the Privacy Commissioner (“OPC”) the ability to add financial terms to compliance agreements would certainly strengthen the OPC’s powers. However, absent the checks and balances found in a true monetary penalty regime, this could end up being nothing more than a coercion tool in the hands of a regulator which has not always demonstrated rigour in its application of the law. [1]
Proposed AIDA amendments
Below is a summary of the most salient points of the Minister’s proposals to the artificial intelligence portion of Bill C-27.
Defining high impact systems: AIDA’s regulatory burden falls largely on “high impact systems”, a term left undefined in the draft Bill, which made it difficult to meaningfully assess what impacts the Bill would have.
Proposed amendments would clarify the meaning of high-impact systems as those for which at least one intended use may reasonably be concluded to fall within a list of classes to be set out in a schedule to the Act. The initial list of classes would be those where AI systems are used for:
- Employment-related determinations, including recruitment and remuneration.
- Determinations relating to (a) whether to provide services to an individual; (b) the type or cost of services to be provided; or (c) the prioritization of the services.
- The processing of biometric information in matters relating to (a) the identification of an individual, other than if the biometric information is processed with the individual’s consent to authenticate their identity; or (b) an individual’s behaviour or state of mind.
- (a) The moderation of content that is found on an online communications platform, including a search engine and a social media service; or (b) the prioritization of the presentation of such content.
- Health care or emergency services, excluding a use referred to in any of paragraphs (a) to (e) of the definition of “device” in section 2 of the Food and Drugs Act that is in relation to humans.
- Use by a court or administrative body in making a determination in respect of an individual who is a party to proceedings before the court or administrative body.
- To assist a peace officer, as defined in section 2 of the Criminal Code, in the exercise and performance of their law enforcement powers, duties and functions.
This would be supported by an amendment to allow the list to be modified by the Governor in Council as the technology evolves.
If these amendments listing high-impact systems are adopted, the current section 7 would become redundant and would be removed.
Aligning AIDA with the EU AI Act and the OECD by making targeted amendments to key definitions: The draft of the Bill was vague on what was caught by the concept of AI and as a result potentially had extremely broad reach.
Proposed amendments would align the Bill with the OECD definition of AI (a “technological system that, using a model, makes inferences in order to generate output, including predictions, recommendations or decisions”). Proposed amendments would also replace current sections 8 and 9 of AIDA with new sections laying out the responsibilities of:
- Persons developing a machine learning model intended for high-impact use, who would now need to ensure that appropriate measures are taken before it goes on the market (either by itself or as part of a high-impact system);
- Persons placing on the market or putting into service a high-impact system, who would now be responsible for ensuring that the necessary measures with regard to development were taken prior to the system entering the market; and
- Persons managing the operations of a high-impact system, who would be responsible for ongoing obligations once the system is in use.
Any person who substantially modifies a high-impact or general-purpose system would be responsible for ensuring that certain pre-deployment requirements are met.
All persons conducting regulated activities would need to prepare an accountability framework consisting of:
- the roles and responsibilities and reporting structure for all personnel who support making the system available for use or who support the management of its operations;
- policies and procedures respecting the management of risks relating to the system;
- policies and procedures on how personnel are to report serious incidents related to the system;
- policies and procedures respecting the data used by the system;
- the training that the personnel must receive in relation to the system and the training materials they are to be provided with; and
- anything else that is prescribed by regulation.
The framework would have to be provided to the Commissioner upon request; the Commissioner would be able to provide guidance or make recommendations regarding corrective action.
Creating clearer obligations across the AI value chain: The Government is also proposing amendments that would clarify the obligations of different actors in the value chain, along the lines set out in the AIDA Companion Document, released in March 2023.
Developers of machine learning models intended for high-impact use would, before placing on the market or putting into service such a model, need to:
- Establish data governance measures; and
- Establish measures to assess and mitigate risks of biased output.
Developers would, before placing on the market or putting into service a high-impact AI system, be required to:
- Perform an impact assessment;
- Establish measures to assess and mitigate risks of harm or biased output;
- Ensure that the system incorporates features enabling appropriate human oversight;
- Ensure the reliability and robustness of the system;
- Conduct testing;
- Prepare a manual for the person managing the operations;
- Comply with any other regulations made by the Governor in Council.
Persons making available a high-impact system would:
- Make the manual available to any person who is to manage the operations of the system; and
- Comply with any other regulations made by the Governor in Council.
Persons managing the operations of a high-impact system would need to:
- Establish measures to assess and mitigate risks of biased output;
- Conduct testing of the effectiveness of mitigation measures;
- Ensure appropriate human oversight;
- Publish a description of the system, including with regard to risks and mitigations;
- Report serious incidents to the developer and the Commissioner;
- Comply with any other regulations made by the Governor in Council.
Distinct obligations for general purpose AI systems: Proposed amendments would create distinct requirements for “general purpose” AI systems like ChatGPT.
Developers of such general purpose systems would, before placing them on the market or putting them into service, need to:
- Perform an impact assessment;
- Establish measures to assess and mitigate risks of biased output;
- Conduct testing of the effectiveness of mitigation measures;
- Prepare a plain-language description of the capabilities and limitations of the system, as well as the risks and mitigation measures taken;
- Comply with any other regulations made by the Governor in Council.
Persons making available general-purpose systems would:
- Provide the plain language description to users of the system; and
- If the system is available to the public, publish the description.
Managers of the operations of general-purpose systems would, in accordance with any regulations:
- monitor for any use of the system that could result in a risk of harm or biased output;
- take measures necessary to mitigate the risks;
- report serious incidents to the developer and the Commissioner; and
- comply with any other regulations made by the Governor in Council.
Detecting AI-generated content: Proposed amendments would also attempt to ensure that AI-generated content can be identified by Canadians:
- If there is a reasonably foreseeable risk that an individual communicating with a system could believe that it is human, the person managing the operations of that system must inform the individual that it is not; and
- Persons developing general-purpose systems that produce text or audio-visual content must make best efforts to ensure that such content can be readily identified as AI-generated.
Strengthening and clarifying the role of the AI and Data Commissioner: Finally, the Government is proposing amendments to clarify the functions and roles of the AI and Data Commissioner.
[1] See, for instance, the commentary of the Court at paras 72-78 in the recent case, Canada (Privacy Commissioner) v. Facebook, Inc., 2023 FC 533 (CanLII).