You may have read about the lawyers who used generative AI to help them draft briefs, briefs that turned out to contain citations to cases that didn't exist. Perhaps in response to these stories and the general explosion of generative AI, the State Bar of California issued guidance addressing lawyers' ethical obligations when using AI tools.
The Bar’s guidance does not outright discourage the use of AI but rather provides guardrails and considerations for lawyers and law firms. Those guardrails include:
- Confidentiality: Lawyers should not input confidential information into generative AI software unless it has been redacted and the AI’s terms of use have been reviewed.
- Reliance on AI tools: Lawyers should not over-rely on AI software and should carefully review any outputs from AI tools for accuracy, bias, and other issues.
- Competency: Lawyers should not substitute outputs from AI tools for their own legal judgment.
- Disclosure: Lawyers should consider disclosing to clients when they use AI tools.
- Fees: Lawyers may charge clients for work product and time spent using AI tools, but only for the actual time spent refining the AI-generated work product, not for the time saved by using those tools.
Much of this guidance is common sense, but it's a good reminder to stay informed about the benefits and risks of AI and to use these tools accordingly.
California isn't the only state to issue guidance. In November, the Florida Bar Association issued a Proposed Advisory Opinion and formed a Special Committee on AI Tools & Resources. New York, New Jersey, and the American Bar Association have created similar task forces, and other states, including Illinois, Kentucky, Minnesota, and Texas, have formed working groups. If this is any indication, AI will continue to be a headline topic for 2024.