As AI tools like ChatGPT and Copilot become part of everyday work, many people (clients and lawyers alike) may instinctively turn to them for a quick answer, including to a legal question.
STOP!
Apart from the fact that AI tools often get it wrong, it is critical to understand that, unlike a confidential communication between a client and lawyer, which is protected by solicitor-client privilege, a chat with an AI tool is not protected by privilege and may be fair game for disclosure in a legal proceeding. This means you should never share sensitive, confidential or incriminating details you would not want made public or in the hands of a third party.
To date, no reported Canadian decision has considered how using AI for legal advice affects privilege. However, we expect a Canadian court’s analysis to resemble that of a recent American decision in which the court held that communication with a public AI chatbot was not protected by privilege and, further, that uploading an already privileged document or communication into a public AI chatbot may waive that privilege.
What happened in the American case?
Heppner, a financial services executive, was charged with fraud. He retained legal counsel but also used the AI chatbot Claude to research legal questions related to his case. He gave Claude information he had learned from his lawyers, asked Claude questions, and then sent copies of those discussions to his lawyers. The FBI later seized printed copies of these discussions from Heppner’s home. [1]
Heppner asserted attorney-client privilege (the American equivalent of solicitor-client privilege in Canada), and a judge ruled the seized documents were neither privileged nor otherwise protected.
Solicitor-client privilege
Solicitor-client privilege has existed for centuries in common law countries such as Canada and the United States. To establish that a communication is privileged, a person must show it was:
- between a lawyer and a client;
- for the purpose of seeking or giving legal advice; and
- intended to be confidential.
The American court found the AI-generated documents failed this test because:
- No lawyer was involved. Claude AI was not a lawyer, and as such it could not form a solicitor-client relationship.
- Claude could not give legal advice. Not only was Claude not a lawyer, but its disclaimer explicitly stated Claude could not give legal advice. Sharing the AI output with a lawyer, after the fact, could not retroactively make it privileged.
- The use of Claude was not confidential. Aside from the fact that Claude was not bound by a duty of confidentiality, as a lawyer would be, there could be no reasonable expectation of confidentiality when chatting with a public AI chatbot. Moreover, Claude’s policy expressly stated that user prompts and outputs may be disclosed to “governmental regulatory authorities” and used to train the AI model. Indeed, learning from the information users input is precisely how many AI tools improve.
Litigation privilege
In addition to solicitor-client privilege, in Canada, we also have “litigation privilege,” which protects a document created for “the dominant purpose of litigation.” This is a broader protection than the American “work product doctrine,” which requires attorney direction to create the document. The American court found work product doctrine did not apply because an attorney was not involved in making the documents. It remains to be seen how a Canadian court might treat litigation privilege in a similar circumstance, though there is still the issue of there being no reasonable expectation of confidentiality when using public AI.
Waiver of privilege
Even when privilege exists, a party may waive it, even inadvertently, by failing to keep privileged communications and documents confidential. The American judge found that when Heppner gave privileged communications and documents to Claude, he waived any attorney-client privilege that may have existed. The same result is likely in Canada, at least with respect to public AI.
What this means for you right now
Can communication with an AI chatbot be considered privileged communication? No. An AI chatbot is not a lawyer.
Can one lose privilege over communications or documents if one inputs them into an AI chatbot? Yes. This may result in a waiver of privilege, especially with a public AI tool.
If one disables history on the AI chatbot, does that make it safe to use? No. Disabling history may reduce the risk, but prompts and chats may still be retained and used by the provider.
What about the use of a private AI chatbot that keeps communications confidential? These tools may improve security, but they cannot establish legal privilege.
What can we do to protect our organization?
Consider the following best practices:
- Create, update, and enforce a robust AI policy that:
- clearly sets out the dos and don’ts of AI use;
- expressly addresses the risks related to privilege and waiver; and
- sets out the ramifications for a breach of the policy.
- Train (and retrain) any member of the organization with access to privileged or confidential information.
- If you wish to use AI for legal research, consider only using it for generic prompts (do not input facts or confidential information).
- Think very carefully before allowing any legal advice you receive (if you are a client) or give (if you are a lawyer) to be inputted into AI or transcribed using AI.[2]
Need help? We’re happy to assist you in navigating this new technological frontier.
Anja Kohlman Sawa is a lawyer with Sherrard Kuzz LLP, one of Canada’s leading employment and labour law firms, representing employers. Anja can be reached at 416.603.0700 (Main), 416.420.0738 (24 Hour) or by visiting www.sherrardkuzz.com.
The information contained in this presentation/article is provided for general information purposes only and does not constitute legal or other professional advice, nor does accessing this information create a lawyer-client relationship. This presentation/article is current as of April 2026 and applies only to Ontario, Canada, or such other laws of Canada as expressly indicated. Information about the law is checked for legal accuracy as at the date the presentation/article is prepared but may become outdated as laws or policies change. For clarification or for legal or other professional assistance please contact Sherrard Kuzz LLP.
[1] United States v Heppner, 1:25-cr-00503-JSR, ECF No. 27 (filed February 17, 2026).
[2] See our August 5, 2025 article, Think before hitting ‘transcribe’.