The integration of artificial intelligence (AI) into the workplace has the potential to alter both individual work relationships and collective dynamics. On January 27, 2025, the Québec privacy regulator (Commission d’accès à l’information or CAI) submitted its brief regarding AI in the workplace to the Québec Ministère du Travail.
AI offers advantages in terms of productivity and organization, but it also brings major challenges related to privacy and employment law. Faced with these challenges, the CAI makes recommendations regarding the use of AI at work and encourages the establishment of a precise regulatory framework for AI in the workplace to ensure its use is transparent and respectful of workers’ fundamental rights.
This brief reflects the CAI’s interpretation of the law and its views on preferred practices.
CAI recommendations on the use of AI in the workplace
The CAI proposes solutions that it regards as helpful to ensure compliance with privacy laws when implementing AI-based technologies in the workplace.
- Publishing an internal policy on the use of AI: Employers should consider publishing an internal policy that explains in detail the use of AI technologies, specifying in particular:
- The name of the systems used and the name of the suppliers, where applicable;
- The purposes pursued;
- The personal information involved, both used and generated by the AI systems;
- The expected impact on the rights of data subjects (including workers) and applicable risk mitigation measures;
- How the results are produced (which indicators are selected, and which main factors and parameters are taken into account);
- How the results will be used in decision-making;
- The exercise of rights in relation to these technologies; and
- Audits and evaluations completed by employers.
It is likely that most employers will not make the audits and evaluations themselves available to employees (in part because they are often subject to confidentiality provisions); instead, employers could simply confirm the types of audits or certifications performed (e.g., ISO).
Although the CAI does not directly address the issue of protecting trade secrets and confidential company information in its submission, it is important to strike a balance between transparency and the preservation of commercial interests, particularly in order to avoid excessive disclosure that could harm the company’s competitiveness.
- Notifying employees: Employers should inform their employees as soon as they plan to use partially or fully automated decision-making; they should not wait until an automated decision is actually being made. We note that Québec's privacy law itself requires such notice only for fully automated decision-making systems.
- Improving your Privacy Impact Assessment (PIA): In addition to conducting a PIA on the development or deployment of an AI system involving the collection, use, disclosure, retention or destruction of personal information, employers should enhance their existing assessment to include an algorithmic impact analysis.
- Carrying out an algorithmic impact analysis: An algorithmic impact analysis should be carried out to assess the effects of AI systems that make partially or fully automated decisions on employees’ fundamental rights and privacy. This analysis could include consultations with employees to hear their concerns and ensure a comprehensive risk assessment.
- Restricting the use of AI: Avoid the use of AI for unacceptable purposes, including the analysis of emotions or psychological states, biometric categorization and fully automated decisions with a significant impact on employees. In this respect, the CAI believes that European Regulation (EU) 2024/1689, which lays down harmonized rules on artificial intelligence, can serve as inspiration in the analysis of prohibited AI practices.
- Ensuring access to data: Employers must put in place mechanisms enabling employees to access their personal information used and generated by AI systems. Note that insights or predictions generated by AI are considered personal information and are subject to this right.
- Demonstrating transparency: The CAI suggests proactive transparency by employers about the technologies employed and their potential impact.
- Involving employees: The CAI also suggests involving employees in decisions concerning the use of AI in the workplace (e.g., employee committee).
- Assessing the relevance, necessity and impact of AI: Employers should consider conducting regular assessments of the relevance, necessity and impact of deployed AI systems, and/or ensuring that vendors of such systems commit to doing so.
- Providing training: Along with any deployment of AI, employers will need to train managers and employees on the legal and ethical issues relating to AI and privacy in the workplace.
Warnings from the CAI
The CAI points out that the rise of surveillance technologies and artificial intelligence systems in the workplace raises major privacy issues. It warns of the scale of the personal information collected and the risks associated with its use, and stresses the need for rigorous oversight.
Among these technologies, the CAI focuses on the use of biometric devices, such as facial recognition and fingerprints, which involve the collection of sensitive personal information. It also highlights the risks associated with employee monitoring software, capable of analyzing their activity in real time, as well as geolocation systems, which can lead to constant tracking of movements. Finally, it warns of the potential abuses of video surveillance, particularly when these tools are coupled with AI systems capable of analyzing employee behaviour or performance.
Faced with these challenges, the CAI insists on the importance of respecting the principles of necessity and proportionality in order to ensure a balance between the objectives pursued and the protection of employees’ rights.
Training AI using employee information
Furthermore, the CAI points out that companies (or their vendors) often use employees’ personal information to train AI models in various areas of human resources, including for:
- Automating recruitment and candidate selection processes;
- Performance evaluation and behavioural trend detection;
- Optimizing schedules and task allocation; and
- Making disciplinary decisions based on the analysis of collected data.
The use of AI in these contexts is not without risk, but remains possible. According to the CAI, training models on massive datasets can lead to discriminatory biases, increased surveillance and opaque decision-making. The CAI therefore insists on the need for a more rigorous framework to ensure that the use of such data respects the principles of necessity, proportionality and transparency.
Employers should directly inquire of vendors whether they will be using employee information to train their AI models and, if so, take appropriate steps to ensure such vendors comply with privacy laws and do not imperil the employer’s compliance with privacy laws.
In the workplace, the use of AI sits directly at the intersection of employment law and the right to privacy.
Employers must also ensure that the use of AI complies with current employment policies, individual employment contracts and the provisions of collective agreements, where applicable. In addition, such use must comply with the fundamental rights protected by the Charter of Human Rights and Freedoms, notably with regard to privacy, dignity and equality of employees, as well as the right to fair and reasonable working conditions. Any use of AI that contravenes these legal frameworks could expose the employer to legal recourse or union challenges.
A clear legal framework for AI
Despite the recent reform of Québec’s privacy laws, the CAI is of the view that workplace and employment contexts may present some challenges in applying the various principles developed by these laws. According to the CAI, existing laws lay a solid foundation, but they should be updated to meet new technological realities, including AI. In the meantime, companies should consider: i) the necessity, proportionality and transparency of their use of AI; ii) the consent (implicit or explicit, depending on the circumstances) of the employees concerned; iii) employees’ ability to exercise their rights, including the rights of access and rectification and the right to withdraw consent; and iv) the use of personal information for specific AI-related purposes.
Conclusion
Although the brief represents only recommendations issued by the CAI, it remains essential to take them into account to ensure the compliance of AI systems throughout their lifecycle. Given the investments required and the intersection of multiple issues, early planning and a proactive approach are crucial to ensuring optimal use of AI, particularly in the workplace.
For more information on this topic, please contact Arianne Bouchard, Alexandra Quigley or Charles Giroux, or a member of the Privacy and Cybersecurity and Employment groups in Canada.