The Growing Dangers of AI in Legal Practice: From California to Georgia and Beyond
- sara2296

Why it Matters for All Lawyers
In this new era of technology, everything moves faster, and much of it is aided by AI. Generative-AI tools (chatbots and large language models) that promise to help with research, drafting, and brainstorming are already part of many professions, especially law. But what happens when the tool makes a mistake, invents case law, or mishandles a client's confidential information? That risk is no longer theoretical; it is becoming real in states across the United States, from California to Georgia, and the consequences are serious for lawyers who get caught up in the wave of new tech.
What California’s Example Teaches Us
Recently, a California appellate court sanctioned a lawyer for including 21 made-up "quotes" generated by AI in an opening brief, fined the lawyer $10,000, and published the opinion as a warning to attorneys across the country. The California judiciary has since issued Rule 10.430, which reflects the courts' position on generative AI: it requires verifying the accuracy of the content of filings and other court documents, prohibits entering private data into public AI systems, and insists on disclosure when a public-facing work is entirely AI output.
The State Bar of California's Practical Guidance reminds lawyers that their duties of competence, confidentiality, supervision, and candor apply even when using generative AI. Simply put, AI cannot be relied upon without supervision, verification, and human judgment.
What’s Happening in Georgia – and Why It Should Concern You
Georgia is now very much in the mix when it comes to consequences for using generative AI in legal filings, and the lessons from both states apply to lawyers across the country.
Recent disciplinary/penalty examples
In Georgia, the Georgia Court of Appeals fined a divorce attorney $2,500 after she cited non-existent cases (determined to be "hallucinations," likely from AI). The judge's order said the citations deprived the opposing party of a chance to respond appropriately.
In another Georgia case, a federal judge chastised an attorney and ordered her to disclose her erroneous use of AI in all court proceedings for the next five years. The judge criticized her "cavalier attitude" in failing to check the AI's output, citing at least 17 invented or misquoted cases among her filings.
The Judicial Council of Georgia's Ad Hoc Committee on AI and the Courts produced a report that highlights AI's promise in the legal world but also emphasizes potential risk areas, among them ethical application, data security, procedural integrity, and vendor/technology oversight.
The State Bar of Georgia has a committee studying how the Georgia Rules of Professional Conduct apply to AI use by attorneys.
Key takeaways for Georgia (and all jurisdictions)
Georgia courts are not giving lawyers a free pass; they are sanctioning attorneys for AI-generated errors just as California has. And because Georgia's oversight is actively evolving, policies and expectations may change quickly, so staying ahead of the curve is wise. All attorneys should stay informed on the courts' decisions regarding AI and remain vigilant if they choose to use generative AI in producing their filings or court documents.
The risks go beyond citation errors: they also include confidentiality, vendor use, client disclosure, and competence.
The "AI tool did it" defense will not hold water before a judge: attorneys are expected to verify outputs, exercise independent judgment, and satisfy their professional duties.
The Real Dangers of AI: Universal Risks for All Attorneys
Whether you practice in Georgia, California, Texas, or any other state, these risks apply:
Fictitious or incorrect citations.
AI may generate plausible-looking cases or quotes that don’t exist. If you file them, you risk sanctions, fee shifting, or reputational harm.
AI may also create realistic-sounding summaries of judges' opinions or case law that misrepresent the actual content of the case being cited, even when that case exists.
Confidentiality/data leak risk
If you plug client-confidential information into a public AI tool (or tool without proper safeguards), you may violate duties of confidentiality, privacy rules, or court rules.
Publicly available AI generators do not promise to safeguard the information you enter, and that data can easily be shared or leaked.
Competence & supervision
Your ethical duty of competence (knowing the law, staying current, understanding tools) includes understanding the uses and limitations of AI.
You are obliged to supervise staff or vendors using AI in order to ensure outputs are reviewed and verified.
Clients may also choose to use generative AI to help them understand their case or contribute to your research. It is important to warn them of these dangers to the extent you are aware of them.
Baseless legal arguments
If you rely on AI-generated content without verification, or without disclosing where required, you may violate rules about the basis of your legal arguments.
Filing something with made-up case law also harms the process and the opposing party’s rights to respond to your filings.
Liability & malpractice risk
If an AI output causes a case to be lost, or a sanction, or damages a client, you may face malpractice or disciplinary exposure.
Reputational risk
Beyond money or sanctions — being the lawyer who filed the fake-case brief could damage your practice and client trust.
Why This Is Especially Important for Georgia Attorneys
Georgia's courts are already sanctioning attorneys for AI-related mishaps and will continue to do so.
The Georgia judicial system is actively reviewing and planning for AI use.
The Georgia State Bar is examining how the Rules of Professional Conduct apply to AI and lawyer technology-related conduct.
Because Georgia courts are already encountering “hallucinated” cases in briefs, your opposing counsel or the court may be more skeptical of filings that appear generic, AI-like, or insufficiently verified.
If you practice in both state and federal courts, you must manage multi-jurisdictional risk: the expectations in California and other states may be stricter or may evolve sooner.
Final Word
Generative AI is a powerful tool—and there’s no reason to reject it outright. But the legal profession is entering a phase where using it carelessly is not an acceptable defense. The experiences of California and Georgia show what happens when lawyers treat AI as a “black box” and file its outputs without review. Adopt the tool, but retain control. Verify everything. Protect your clients. Protect your reputation.
At Sara Stewart Law, our legal filings are NEVER wholly generated by AI. They are ALWAYS written with human insight, care, and attention to the law and the specific facts of your individual case.
The content of this article should not be construed as legal advice and has been shared for educational and informational purposes only.