Taking a Closer Look at AI, ChatGPT and the Legal Field

As the legal industry continues to embrace technology, artificial intelligence (AI) and large language models (LLMs) are gaining increasing attention as tools for legal research and analysis. One such AI tool is ChatGPT, a language model developed by OpenAI, which uses natural language processing to generate human-like responses to text-based queries. Legal professionals are exploring the potential of ChatGPT and other AI and LLMs to streamline research, analyze complex legal documents, and improve decision-making processes. In this article, we’ll take a closer look at how these cutting-edge technologies are being used in the legal industry and explore their potential impact on the practice of law. 


If the above paragraph has you intrigued, well, you’ve been seduced by a robot. Such is the power of AI and LLMs like ChatGPT: They’re becoming so intuitive — and ubiquitous — that it’s hard to distinguish between what’s machine-generated and what’s not. For legal practitioners, the necessity of distinction goes even further: You need to be aware of what’s lawful and what’s ethical when engaging with AI tools. 

To help us out, we checked in with Tim Swan, a partner at Baker Hostetler who has been in practice for 18 years and focuses on complex technology transactions. Swan is in the Baker Hostetler business group and has seen the use of AI and LLMs explode in recent years. 

What is the conversation surrounding AI in the legal workplace? How are people encountering AI there? 

TS: Recently, GPT has been front of mind for lots of clients and for us. Our clients’ employees, and our employees, want to use it. They ask if they “can” and what the parameters are around what they can do. A lot of that turns into follow-up questions about what the client or employee is actually using: which version of ChatGPT they are using, and the terms under which they’re using the application. 

At Baker Hostetler, we have a team called IncuBaker. They track and stay up to date on legal technology and help us and our clients develop technology-driven solutions. They are working on procuring private versions of an LLM (large language model) tool. Once we have our own private instance of the tool, we can load documents into it without worrying about those becoming public or being used by competitors. A lot of the concern for our team is that you train the tool once, but if the vendor moves to a new version, you have to train it all over again. So there can be a lot of cost and time in training and retraining these models; you have to balance the cost vs. the ROI. 

We are also aware that some firms are using ChatGPT today in non-legal ways, like creating press releases. Other firms are looking at it to help with internal research and contract drafting, but all of that has to be built up and trained. 

What are some of the potential legal issues AI raises?

TS: Whatever you put into the public ChatGPT instance, the terms of use say the vendor is allowed to use that input to train its model and keep copies. This creates concerns about confidentiality and, for lawyers, privilege. We are counseling clients not to put anything confidential into public LLM tools. 

Another concern is infringement. Basically, GPT scrubs the internet; that’s where it gets all its source information. So if it’s pulling software code, quotes, etc., we don’t know where the material is coming from, because the tool doesn’t attribute what it spits out. If you use ChatGPT output on your website or put it in the source code of your software, there’s no guarantee that what ChatGPT gave you wasn’t owned by somebody else. Your reproduction and distribution of the material may have infringed upon someone else’s copyright. 

The interaction of AI tools with other applications and systems is another interesting issue. Over the last few years, lots of clients have procured commercial AI tools, and we have been doing it ourselves at the firm. When procuring any AI, you need to think through how to bring it into your systems, how to train it, and who owns the data. Additionally, when you bring in a chatbot or an RPA [robotic process automation] tool, it needs to access other third-party software in your environment. So what happens when one program talks to another in much the same way a human interacts with that program? What does that mean? Are you allowed to do that under your contract? How much does it cost? That dynamic creates lots of interesting questions. 

What are some of the under-appreciated concerns with using AI?

TS: GPT is [prone to] hallucinations. [Hallucinations are confident answers generated by AI that have no grounding in any of its training data. These are also referred to as delusions or confabulations.] Sometimes it gives you the right answers, sometimes it gives you completely wacky answers. Almost more dangerous for lawyers is that sometimes it gives you slightly wrong answers. It’s pretty close to right, and if you read through quickly, you’d say, ‘Yeah, that looks good.’ 

One thing it’s good at is summarizing things for you. You can say, “Summarize this website for me,” and it does a pretty good job. The IncuBaker team did an experiment where it had ChatGPT summarize an ABA article on AI’s impact on the legal community. The summary was pretty close, but there were things it conflated or got wrong. For one, the ChatGPT summary said firms were seeking to move into low-margin practices; the article didn’t say that, but it did mention potential impacts on lower-margin practice areas. Those close-but-not-quite problems would be a big issue in a field where we’ve got to be precise and accurate. 

The latest version of ChatGPT, built on GPT-4, passed the bar exam by a “significant” margin. Will AI attorneys be a reality one day? 

TS: It’s not surprising when you understand what ChatGPT is; the multiple-choice portion of the bar exam is a lot of memorization. A computer is going to do that way better than most humans. I think what gets conflated is that a human sat down and took the test, and a computer “sat down” and took the test; they both passed, so they’re now equivalent. That’s missing a lot of the point. ChatGPT is much narrower in what it can do. It’s really good at answering a single question, or even a more complex question, but putting everything together in context? Think of what clients value: They don’t just ask one question and get one answer. Our value is built on years of experience with context and subtext, and it goes beyond just the most recent question. 

What are lawyers themselves thinking about the potential use of AI? 

TS: The answer is all over the board. There are certainly lawyers who are concerned about it. Others are really excited like the IncuBaker team — their job is to find these cool technologies and implement them. It’s an exciting thing for people, certainly for younger attorneys, to think about how they can influence where the practice of law goes in the near future. Having a brand-new technology always creates some opportunities for younger associates to be the ones learning it. 

I think most attorneys are probably waiting to see. I’ve been practicing for 18 years, starting right after BlackBerrys came out. There’s been so much technology evolution over the last 10 years in what we do in the practice of law; this just seems to be the next step. It will be interesting to see how it goes, but it’s going to be an iterative process for the reasons we’ve talked about. There are currently too many mistakes in what’s publicly out there, so we have to figure out the right private tools or develop our own. We need to get very comfortable with how these tools work before we rely on them. 

Shokoohe is the director of communications for the CBA.