Everybody Calm Down: AI Won’t Rule in Law

The legal profession needs to calm down about AI. The worry that certain aspects of legal work are being taken over by AI isn’t irrational; it’s already happening. But AI will not replace lawyers. What it will do is empower the lawyers who choose to embrace it. Lawyers who learn to use AI strategically may soon outperform those who don’t.

As law schools begin integrating generative AI tools into their curricula, it’s time for employers to adapt. Accepting this new reality is the first step toward building a firm that is ready to assist clients using the best tools at their disposal. Adapting isn’t just about keeping up with the latest tech. It’s about understanding how the next generation of lawyers already uses it, and how that can benefit your practice.

Transforming Fear into Curiosity

Of course, there are valid concerns that students and practicing attorneys alike must consider when using AI. One of the most prominent is the issue of AI “hallucinations,” when an AI tool confidently generates incorrect or misleading results. The most prominent example of this leading to major problems comes from Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 466 (S.D.N.Y. 2023), where attorneys were sanctioned after submitting, and then defending, non-existent judicial opinions and quotes that were created by an AI system. The case, understandably, opened the floodgates for concerns about accuracy, ethics, and potential malpractice with the use of AI. Those concerns are real, but the response shouldn’t be to avoid AI altogether.

We’ve lived through technological transformation before: computers, email, cloud storage, and video conferencing. Each met initial skepticism followed by widespread acceptance. AI isn’t “doomsday tech”; it’s just the latest development in a long history of transformative tools, such as the calculator, Microsoft Word, and Westlaw. And just as a calculator doesn’t erase the need to understand foundational math, AI doesn’t eliminate the need for legal reasoning. When taught and used properly, AI will supplement rather than supplant your legal analysis.

A common concern is that AI will “dumb down” the profession. The truth is, the details of the work may change, but the theme does not. The legal field has always evolved: from colonies to states, from typewriters to Word, from casebooks to databases. Through it all, the core values of critical thinking, advocacy, and human judgment have remained central. Attorneys have long made an art out of analysis, and AI simply provides new tools to rationalize, conclude, and opine. But as to the creativity and judgment needed to make artistic arguments? That’s still entirely human.

What Law Students Are Learning Now, and How It Can Work for You

Law schools aren’t waiting for law firms to catch up. They’ve already begun incorporating AI into the curriculum. At all three area law schools (Salmon P. Chase College of Law, the University of Cincinnati College of Law, and the University of Dayton School of Law), AI usage is now taught in conjunction with research and writing. Further, each school has adopted AI policies directly into its student handbook.

At Chase, Dean Judith Daar created the Chase Law AI Task Force, consisting of faculty and student members, to address the College of Law’s approach to AI. Early on, students are introduced to tools like Lexis Protégé, Westlaw CoCounsel, and ChatGPT during foundational courses in legal research, methods, and writing. According to Eric Young, Associate Dean for Library and Technology Services and Chair of Chase’s AI Task Force, “[t]hese courses teach students how to effectively, efficiently, and ethically use ChatGPT and similar tools,” but the training does not stop there. Young explains that while legacy platforms like Lexis and Westlaw remain central, the school is also exploring relationships with newer vendors such as Briefpoint, ClaudeAI, Clearbrief, and Harvey. “A goal of the Task Force is to reach out to newer legal AI vendors... to better expose students to the ever-evolving world of legal AI tools,” he said.

But exposure to technology is just the beginning. Chase emphasizes critical thinking in tandem with technical skills. “AI should support, not supplant, independent legal thinking and strong writing skills,” Young notes, further stating that “[w]hen used effectively, efficiently, and ethically, AI tools can, for example, streamline legal research, help identify patterns, and assist with drafting, allowing students to focus more deeply on critical analysis, strategic thinking, and persuasive communication.” Students are taught to use AI as a tool while also learning to question its outputs, recognize algorithmic bias, and maintain ethical awareness when using AI in practice.

The law school has also taken steps to ensure that what students learn aligns with the needs of employers. “Chase graduates will bring a combination of practical skills in AI-assisted workflows and a strong grounding in professional responsibility related to AI use,” Young said. Informal partnerships with practitioners and adjunct faculty inform the curriculum in real time, helping bridge the gap between legal education and evolving firm needs. Looking forward, Young predicts AI literacy will soon be as essential as legal writing or research. “Lawyers will need to understand not only how to use these tools effectively but also their limitations, risks, and ethical implications,” he said.

Law students today aren’t being told to replace traditional skills with technology. They’re learning how to blend the two in a way that strengthens their legal work. By being trained to blend traditional skills with modern tools grounded in critical thinking and guided by professional responsibility, law students will enter the workforce with a skillset that is an asset in any practice. That’s not a threat to the legal profession; that’s a sign of its continued growth.

The Courts Have Addressed AI Use

Ohio

The Supreme Court of Ohio and the Ohio Judicial System have taken proactive steps by launching an Artificial Intelligence Resource Library, which offers AI ethics guidance for judges, attorneys, and court staff. The site also links to a wide range of resources, everything from specific ethical guidelines to relevant rules and legal reports, both from inside and outside Ohio. To find it, visit the Supreme Court of Ohio website and search “Artificial Intelligence Resource Library Ohio.”

Since many readers work in Hamilton County, it’s especially important to know about Local Rule 49, which governs the use of artificial intelligence in court submissions. The rule outlines what’s expected of attorneys, including the disclosure of “AI-assisted technology in the creation or editing of any document or evidence submitted to the court.” It also requires a “general description of the AI technology used and its role in the preparation of the materials.” It’s a step in the right direction, one that encourages transparency while also acknowledging AI’s growing role in legal practice.

Kentucky

For AI use in Kentucky, attorneys should look to Ethics Opinion KBA E-457, released by the Kentucky Bar Association. The opinion outlines how an attorney is expected to conduct themselves when using AI, including disclosure both in the courtroom and to clients.

Both states are approaching AI with caution and practicality. Judicial acknowledgment signals that AI isn’t going away, but its use must be grounded in transparency and ethical responsibility.

How Firms Can Stay Ahead: A Practical Policy Example

The saying “stay ready so you don’t have to get ready” rings especially true here. Our firm, which employs clerks from each of the aforementioned schools, has implemented an AI policy to better use the next generation’s skills.

“Every team member at Lawrence & Associates has a duty to safeguard client confidentiality and to ensure we give our clients the best, most accurate answers to their legal questions. These duties apply when using Generative AI, which we define as large language models (ChatGPT, Claude) or natural language processing algorithms (Google, Westlaw). You should never feed client-specific information into these programs, as it can be stored and re-published to third parties. This breaks client confidentiality. Instead, anonymize requests given to Generative AI so any output to third parties does not directly implicate any particular client or case. You can fill in any specifics when you write your final product.

Further, Generative AI can be – and will be – incorrect in its responses or rely on incorrect sources. You have an obligation to verify any information you receive from Generative AI and to correct it if necessary. Any final product you produce should be your own. While it is acceptable to use these tools to refine your work, Lawrence & Associates wants the final product to be a result of your knowledge, reasoning, and skills rather than the work of a program. Please direct any questions on this policy to the HR admin.”

This policy works because:

It’s flexible: While it gives examples like ChatGPT and Claude, it doesn’t restrict future tools, leaving room to grow with the technology.

It’s simple: The policy is short enough to fit comfortably on a page in the handbook, but long enough to leave little room for confusion. It’s easy to understand and easy to follow.

It’s trust-based: It relies on the due diligence of good hiring and training, then puts trust in the team. Just as we don’t micromanage someone’s use of Google or Westlaw, we shouldn’t panic over AI either. 

Why This Matters to Young Lawyers

For young clerks or attorneys, stepping into a firm with an established AI policy offers immediate peace of mind. It signals that the workplace isn’t afraid of change; it embraces it. More importantly, it makes them feel respected and valued as contributors, people whose education and developing skill set may differ from what employers are used to but are nonetheless recognized as assets rather than liabilities.

Instead of hiding how they are taught or tiptoeing around outdated expectations, they can feel empowered to contribute both meaningfully and efficiently in the way they know best. That clarity and trust from the beginning gives them space to grow, find their voice, and focus on producing great work. It creates an environment where innovation is welcomed and where they and their fellow law clerks can be shaped into capable, thoughtful future attorneys. That begins with a forward-thinking team, one that knows good lawyers are built, not born. Let your new hires build the future of your firm on the AI foundation they’ve learned in school.

Building the Future, Not Fearing It

AI isn’t here to replace lawyers; it’s here to reshape the way we practice. Law schools are already preparing students to use these tools with confidence and care. Forward-looking firms are doing the same by adopting flexible policies, promoting ethical use, and fostering cultures of trust.

The future of law is already underway. The only question is whether you’ll be ready to build with it, or risk falling behind.


Jeimarie Morales is a 2L at Northern Kentucky University’s Salmon P. Chase College of Law and a first-generation law student. She is currently working as a law clerk at Lawrence & Associates where she is growing her passion for litigation, client advocacy, and exploring different areas of the law. Justin Lawrence is the managing partner of Lawrence & Associates, a litigation firm operating throughout Kentucky and Ohio.  
