By now, many of us have heard of the U.S. lawyers who have found themselves in hot water as a result of their reliance on the generative artificial intelligence (AI) tool known as ChatGPT. For example, New York lawyers used ChatGPT to help them research case law, and filed a court brief that relied on legal precedents entirely invented by the tool. The lawyers allegedly had no idea that ChatGPT could fabricate precedents, and they failed to verify its sources. In June 2023, a District Judge found that these lawyers “abandoned their responsibilities when they submitted non-existent judicial opinions with fake quotes and citations […] then continued to stand by the fake opinions after judicial orders called their existence into question.” The District Judge imposed sanctions on the lawyers, including a penalty of $5,000 USD.
ChatGPT’s false precedents have now made their way into Canadian courts. In a recent civil matter in the B.C. Supreme Court, opposing counsel discovered that the other lawyer had submitted fabricated case law. Like the U.S. lawyers, the Canadian lawyer was apparently unaware that AI chatbots such as ChatGPT can produce misleading or incorrect information, and she failed to double-check her work. According to media reports, the lawyer is currently under investigation by the Law Society of British Columbia, and the opposing lawyers in the case are also suing her personally for special costs, arguing that they should be compensated for the work required to uncover the bogus cases before they were entered into the legal record.
Canadian courts and Canadian law societies are now regulating or warning lawyers about the use of generative AI. For example, counsel, parties, and interveners in proceedings before Canada’s Federal Court must declare when a filed document contains AI-generated content (e.g., “Artificial intelligence (AI) was used to generate content in this document”) and must consider certain principles when using AI to prepare such documents. Similarly, in June 2023, the Court of King’s Bench of Manitoba and the Supreme Court of Yukon each issued a practice direction regarding the use of AI in court submissions, stating that “there are legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence”. Accordingly, the Court of King’s Bench of Manitoba now requires that materials prepared with the assistance of AI and filed with the court indicate how the AI was used, and the Supreme Court of Yukon now requires that any lawyer or party who relies on AI for legal research or submissions before the Court advise the Court of the tool used and the purpose of its use.
B.C., where the incident occurred, is no exception: the Law Society of British Columbia has published its own guidance for lawyers on the use of generative AI.
ChatGPT’s shortcomings, as highlighted in the U.S. legal system, were a global hot topic in 2023, and this parallel incident in B.C. should serve as a wake-up call for Canadian lawyers and other users of generative AI. It also underscores the need for continued legal education and oversight with respect to the use of AI. Although AI has proven to be a useful tool in many circumstances, the technology should not be blindly relied upon, and users of generative AI must proceed with caution.