New California Bar rules guide lawyers on using generative AI while protecting client data, ethics, and billing transparency.
The rise of generative AI in the legal field is changing how lawyers work, but it also raises serious questions about ethics, confidentiality, and client protection. In May 2025, the State Bar of California released a practical guidance document to help lawyers understand how to responsibly use GenAI in legal practice.
This guidance doesn’t introduce new laws. Instead, it explains how existing rules of professional conduct apply when AI tools are used in a legal setting. Whether you're a solo attorney, a law firm, or a user of legal tech like ProPlaintiff.ai, understanding these guidelines is essential to protect both professional integrity and client rights.
Here’s what every legal professional and tech user needs to know.
Under the duty of competence, lawyers must understand the tools they use, including generative AI. That means they should learn how the technology works, recognize its limitations and risks, and keep that knowledge current as the tools evolve.
If an attorney uses AI to draft documents or analyze information, they are still responsible for the outcome. GenAI is a powerful tool, not a replacement for legal judgment.
One major concern: GenAI tools may send data to third-party servers, including cloud services outside the lawyer’s control. This could violate client confidentiality if sensitive information is shared.
Lawyers are expected to review a tool's terms of use and data-handling practices, understand where client information goes, and keep confidential details out of any system that cannot adequately protect them. One practical safeguard, illustrated below, is to anonymize or redact identifying information before anything leaves the lawyer's control.
Confidentiality isn’t optional. Even a helpful tool like AI must be used in a way that safeguards client trust.
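To make that safeguard concrete, here is a minimal Python sketch that strips obvious identifiers from a prompt before it is sent anywhere external. The patterns, the sample prompt, and the send_to_llm placeholder are all hypothetical; a real workflow would rely on a vetted tool and firm-approved policy, not a handful of regular expressions.

```python
import re

# Obvious identifier formats only; real de-identification (names,
# addresses, case numbers) needs far more than a few patterns.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[.-]\d{3}[.-]\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers before the text leaves the lawyer's control."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = (
    "Draft a demand letter for a client, SSN 123-45-6789, "
    "reachable at 555-867-5309 or client@example.com."
)
safe_prompt = redact(prompt)
print(safe_prompt)

# send_to_llm() is a stand-in for whatever vetted, contractually covered
# service the firm has approved -- it is not a real library call.
# response = send_to_llm(safe_prompt)
```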
AI isn’t a shortcut around responsibility. Attorneys must supervise how GenAI is used in their practice, just as they supervise junior lawyers and nonlawyer staff, and review its output before it reaches a client or a court.
Lawyers can’t blame the tool for bad results. They have a duty to make sure everyone involved in their practice - including automated systems - meets ethical and professional standards.
The State Bar makes this point crystal clear: Only lawyers can give legal advice.
Using GenAI to generate arguments or answer legal questions is fine, as long as the lawyer is reviewing and making final decisions. Letting a machine advise a client directly? That’s crossing the line into the unauthorized practice of law.
Lawyers must tell clients when GenAI is involved, especially if it affects how the work is performed, what the client is charged, or how the client's information is handled.
A client has the right to know how their case is being handled, including whether AI helped write their legal documents. Clear communication is part of informed consent.
AI can hallucinate. It can generate plausible-sounding but false information. That’s a big problem in law, where accuracy is everything.
Lawyers are required to review AI output for accuracy and to verify every citation, quotation, and factual assertion before relying on it or filing it with a court.
There have already been high-profile cases where lawyers faced sanctions for submitting fabricated, AI-generated citations. The lesson: always verify.
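As a sketch of what that discipline can look like in a workflow, the snippet below pulls citation-shaped strings out of an AI draft and flags anything the attorney has not personally confirmed. The regular expression, the draft text, and the verified set are illustrative only; nothing here checks a real legal database.

```python
import re

# A loose pattern for volume/reporter/page citations like "410 U.S. 113".
# Real citation formats vary far more; this is only an illustration.
CITATION_RE = re.compile(r"\b\d{1,4}\s+[A-Z][A-Za-z0-9.]*\s+\d{1,5}\b")

# Citations the drafting attorney has personally confirmed in a trusted
# reporter or research database (hypothetical example).
verified = {"410 U.S. 113"}

ai_draft = (
    "The court relied on 410 U.S. 113 and extended its reasoning "
    "in 123 F.4th 456 to reach the same result."
)

for cite in CITATION_RE.findall(ai_draft):
    if cite in verified:
        print(f"{cite}: confirmed by the attorney")
    else:
        print(f"{cite}: NOT YET VERIFIED -- check before filing")
```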
Efficiency is one of GenAI’s biggest benefits—but lawyers can’t use it to justify inflated billing.
The California Bar emphasizes that clients should be billed for the time actually spent on their matter, not for the hours the work would have taken without AI, and that any AI-related costs passed on to clients must be disclosed in the fee agreement.
Want a deeper dive into this topic? Check out our related ethics blog: Why Lawyers Don’t Have to Lower Fees for Faster Work
If AI is used to create content or interact with potential clients, it still must follow advertising rules.
That includes making sure AI-generated marketing content is truthful and not misleading, and making clear when a prospective client is communicating with an AI system rather than a lawyer.
This is a growing issue across jurisdictions. For example, see how the Florida Bar is now officially approving GenAI use—with guardrails.
The message from California’s State Bar is simple: generative AI can help, but it doesn’t replace the lawyer's duty, ethics, or professional responsibility. Anyone using legal tech needs to ensure the tools serve the client, not the other way around.
At ProPlaintiff.ai, we take these concerns seriously. Our platform is built with data security and confidentiality at its core.
We never store or transmit identifiable case data without encryption, and we never share user inputs with third-party large language models. Whether you're preparing a demand letter or organizing discovery, you can trust that your data stays safe with us.
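For readers who want to see what encrypting stored data looks like in concept, here is a minimal Python sketch using the open-source cryptography library. It illustrates the general technique only; it is not a description of ProPlaintiff.ai's actual implementation, and the record contents and key handling are hypothetical.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical case record; identifiers already removed upstream.
record = b"Matter 2024-001 | demand letter draft | settlement target: $50,000"

# In a real system the key lives in a managed key store, never in code.
key = Fernet.generate_key()
fernet = Fernet(key)

ciphertext = fernet.encrypt(record)          # what would be written to disk
assert fernet.decrypt(ciphertext) == record  # round-trip sanity check
print(len(ciphertext), "bytes of ciphertext")
```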
Want to learn more about how ProPlaintiff.ai empowers modern legal professionals while keeping clients protected? Contact us or try our platform risk-free.
Check out other Legal AI Posts