Florida Bar Approves Lawyers' Use of Generative AI — Ethical Guidelines

A new ethics opinion from the Florida Bar confirms that attorneys may ethically use generative AI tools like those offered by ProPlaintiff.ai, provided they maintain competence, supervise outputs, protect confidentiality, and ensure billing transparency. The opinion encourages responsible innovation while warning of potential risks. This post explains the key takeaways from the opinion, outlines the four major ethical duties lawyers must uphold when using AI, and offers practical steps for integrating AI solutions into law practice ethically. Learn how ProPlaintiff.ai can help plaintiff firms leverage AI effectively and compliantly.

A new advisory opinion from the Florida Bar confirms that lawyers may ethically use generative AI, provided they continue to meet all professional obligations. Issued by the Florida Bar’s Board Review Committee on Professional Ethics, the opinion offers guidance for attorneys seeking to integrate AI tools while remaining compliant with longstanding ethical standards.

This guidance arrives as more legal professionals embrace AI tools like ChatGPT, automated research platforms, and document drafting assistants. While the Florida Bar acknowledges the advantages these tools can bring, it stresses that careful oversight is critical to avoid ethical missteps.

The Background: Why the Florida Bar Issued This Opinion

As artificial intelligence becomes increasingly embedded in legal practice, concerns have grown around competence, confidentiality, and supervision. To address these issues, the Florida Bar released Proposed Advisory Opinion 24-1, clarifying how generative AI fits into the ethical framework.

This new guidance builds upon an earlier advisory letter and reflects the Florida Bar’s recognition of AI’s growing role in client service. The opinion emphasizes that while AI can be valuable, it must be used responsibly.

To stay updated on developments like this, visit our ProPlaintiff.ai News section.

Key Takeaways from the Florida Bar’s Advisory Opinion 24-1

The main takeaway is that lawyers are permitted to use generative AI tools but must do so carefully, ensuring full compliance with ethical duties.

Highlights from the opinion include:

  • Lawyers must maintain competence by understanding how AI tools function and recognizing their limitations.

  • They are required to supervise nonlawyer assistants — a category that now extends to AI technologies.

  • Confidentiality must be preserved, particularly when using cloud-based AI systems.

  • Clients should be informed when AI usage materially affects their case.

  • Billing must be transparent and fair, accurately reflecting how AI-assisted work is handled.

Ultimately, attorneys remain fully accountable for all work product, whether or not AI was involved in producing it.

For firms looking to integrate ethical AI into their practice, ProPlaintiff.ai can provide customized solutions and support.

The Main Ethical Duties Lawyers Must Uphold When Using Generative AI

The Florida Bar highlights four core ethical areas attorneys must be vigilant about when working with generative AI.

1. Competence and Understanding of AI Tools

Lawyers are expected to stay technologically competent under existing ethics rules. This includes understanding the strengths and weaknesses of any AI tools they rely on. Blindly accepting AI-generated outputs without careful review could lead to malpractice or ethical violations.

2. Confidentiality and Data Security

When using AI applications, particularly those that are cloud-based, attorneys must ensure that client information remains confidential. The Florida Bar stresses that lawyers should thoroughly evaluate the data security measures of any AI service providers they use.

3. Supervision of AI Systems as Nonlawyer Assistants

Ethical rules require lawyers to supervise nonlawyer personnel. According to the Florida Bar, this duty extends to AI systems used for research, drafting, or analysis. Lawyers must review AI outputs just as they would review a paralegal’s work.

4. Billing Transparency and Honesty

When billing clients, lawyers must be truthful about how AI tools are utilized. Overcharging for AI-assisted work or misleading clients about the extent of human involvement can lead to serious ethical breaches. Proper disclosure is especially important when AI materially impacts the client’s matter.

Why This Opinion Matters for Lawyers and Law Firms

The Florida Bar’s advisory opinion offers clear guidance at a time when AI’s influence on legal practice is expanding rapidly. It provides a pathway for lawyers to use generative AI tools like those offered by ProPlaintiff.ai without compromising ethical standards.

Attorneys who embrace AI responsibly can enhance their efficiency and client service. Those who fail to adapt or ignore ethical requirements risk client dissatisfaction — or worse, disciplinary action.

For plaintiff firms ready to responsibly integrate AI into their practice, ProPlaintiff.ai offers tools specifically designed to enhance case preparation, legal research, and document automation, all while adhering to ethical obligations. Contact us today to learn how we can help.

FAQs

What is generative AI in legal practice?

Generative AI refers to software that can create text, draft legal documents, summarize case law, and assist with legal research based on user prompts.

Is it ethical for lawyers in Florida to use generative AI like ChatGPT?

Yes. The Florida Bar’s new ethics opinion confirms that lawyers can ethically use generative AI, as long as they maintain competence and confidentiality, supervise AI outputs, and ensure billing transparency.

Do lawyers have to tell clients when they use AI?

In cases where AI significantly impacts a client’s matter, attorneys must inform their clients about its use.

What risks do lawyers face if they use AI improperly?

Potential risks include exposing confidential client information, relying on inaccurate AI outputs, committing billing fraud, and failing to properly supervise AI tools.