Every sector is experiencing the transformative effects of digital innovation, and the legal sector is certainly no different. AI, particularly AI-powered custom GPT bots, can help law firms better serve their clients as long as the risks associated with AI are correctly mitigated.
First, let’s look at AI’s broader applications for the legal sector before delving into some of the features of a CustomGPT bot.
Whilst we won’t cover the risks of using AI in law in too much depth in this article, we will note that there are potential pitfalls, and law firms must embrace AI with care.
A Goldman Sachs report published in March 2023 found AI has the potential to be even more disruptive in the legal sector than other industries and that AI could automate as much as 44% of legal tasks.
Let’s take a look at the use cases of AI for law.
Law firms must digest and disseminate information from thousands of documents, often searching extensively for keywords, clauses, or precedents. If AI can match the diligence of the human eye, it could hugely accelerate these searches. E-discovery is estimated to be one of the most widely applied use cases of AI in law to date.
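As a simple illustration of the kind of clause search described above, here is a minimal Python sketch. The document set, filenames, and the `find_clauses` helper are hypothetical stand-ins for demonstration, not a real e-discovery tool:

```python
import re

# Hypothetical mini-corpus standing in for a firm's document set.
documents = {
    "msa_2021.txt": "This Agreement may be terminated for convenience upon 30 days notice.",
    "nda_2022.txt": "Confidential Information shall not be disclosed to third parties.",
    "loan_2023.txt": "The Borrower may prepay the loan; termination fees may apply.",
}

def find_clauses(docs, pattern):
    """Return {filename: [matching sentences]} for a regex pattern."""
    hits = {}
    for name, text in docs.items():
        # Split into rough sentences, keep those matching the pattern.
        sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text)]
        matches = [s for s in sentences if re.search(pattern, s, re.IGNORECASE)]
        if matches:
            hits[name] = matches
    return hits

# Find every sentence touching on termination across the corpus.
results = find_clauses(documents, r"\bterminat\w*")
```

A real e-discovery pipeline adds OCR, deduplication, and semantic search on top, but the core task, surfacing every passage that matches a concept, is the same.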
AI is not yet reliable or accurate enough to replace a lawyer or legal expert, but for basic documents and contracts, it can help create a first draft or outline, as long as the output is thoroughly checked.
AI also has the potential to summarize key issues or actions in a document, provided it can match the accuracy of an expert and is deemed reliable enough. JPMorgan is already using its COIN program to decipher commercial loan agreements, saving the company up to 360,000 lawyer hours.
ChatGPT could be used to analyze legal data to identify patterns and trends. This has the potential to predict the outcomes of cases, identify risks, and, on a larger scale, identify areas for legal reform.
Law firms are often slowed by the volume of basic and admin tasks that need to be performed. AI-powered virtual assistants have the potential to safely assist lawyers and paralegals with non-expert tasks such as scheduling appointments and sending simple emails.
AI-powered chatbots can be built to pull only from a provided data set, eliminating the risk of “hallucinations” and other false information. This kind of chatbot can safely answer simple client questions, perhaps about general law, and be programmed to refer more complex questions to a human expert.
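The escalation logic behind such a chatbot can be sketched in a few lines. This is a hypothetical toy, assuming a tiny hand-curated Q&A set and a simple string-similarity matcher (the `knowledge_base` entries, `FALLBACK` message, and threshold are all invented for illustration; production systems use far more robust retrieval):

```python
from difflib import SequenceMatcher

# Hypothetical curated Q&A set the bot is allowed to answer from.
knowledge_base = {
    "what are your office hours": "Our offices are open 9am-5pm, Monday to Friday.",
    "how do i book a consultation": "You can book a consultation via our website or by phone.",
}

FALLBACK = "I'll refer your question to one of our legal experts."

def answer(question, threshold=0.6):
    """Answer only from the curated set; otherwise defer to a human."""
    q = question.lower().strip("?! .")
    best_key, best_score = None, 0.0
    for key in knowledge_base:
        score = SequenceMatcher(None, q, key).ratio()
        if score > best_score:
            best_key, best_score = key, score
    # Below the confidence threshold, escalate rather than guess.
    if best_score < threshold:
        return FALLBACK
    return knowledge_base[best_key]
```

The key design point is the fallback branch: anything the bot cannot confidently ground in its approved data set is routed to a human rather than answered speculatively.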
PwC rolled out a legal AI chatbot internally for its 4,000 staff in March 2023 to help generate insights and recommendations based on large volumes of data. Its outputs are overseen and reviewed by PwC professionals. Sandeep Agrawal, Global Leader for Legal Technology at PwC UK, said at the time:
“Integrating Harvey into our day to day activities will free-up much needed time and resources allowing our people to focus more on innovation and value accretive tasks.”
The world’s largest global law firm, Dentons, launched its own version of ChatGPT in August 2023, with aspirations to incorporate generative AI into its daily workflows. In June 2023, Dentons published an article discussing the responsible use of ChatGPT and its limitations, with examples of recent AI confidentiality scandals.
The risks of using AI in law potentially go beyond the simple list we’ve created. The key takeaway for legal professionals considering AI is the need for in-depth research in order to take advantage of this nascent technology safely.
AI models can be complex, and they don’t necessarily explain or clarify the sources of their information. This is a particular concern for law. As it stands, the content generated by ChatGPT and competitors, even for non-legal purposes, needs thorough checks for accuracy.
However, with CustomGPT’s use of citations, this challenge is addressed by providing transparency about the sources of its information, making it easier for users, particularly in the legal field, to verify and trust the content generated. This feature not only enhances the model’s reliability but also aligns with the critical need for accountability and traceability in legal applications.
Not only has ChatGPT been found to falsify information or deliver “hallucinations” presented as fact, but it can also simply be inaccurate. The speed at which OpenAI’s ChatGPT and its competitors iterate new versions and capabilities, combined with the newness of the technology overall, makes it difficult to determine how far these inaccuracies permeate. Research published in July 2023 revealed that ChatGPT’s accuracy was declining rather than improving.
This is an area where CustomGPT’s anti-hallucination technology can benefit users by providing more reliable and accurate responses, reducing the occurrence of misleading or incorrect information. By focusing on minimizing hallucinations, CustomGPT can enhance the credibility and dependability of AI-generated content, especially in fields where accuracy is paramount.
If law firms are to use AI, legal professionals must be accountable for any output, its accuracy, and its impact. There are already cases where legal professionals have suffered consequences for using inaccurate AI-generated content.
This is an example where the principle of “human in the loop” becomes crucial, emphasizing the need for a robust editorial process where subject matter experts (SMEs) play a pivotal role in supervising and validating AI-generated content. This approach ensures that the final output adheres to legal standards and ethical norms, while harnessing the efficiency and analytical capabilities of AI.
There are massive implications in training AI models on client or other sensitive information, or even simply providing this information for summary or analysis. If AI is to be used, it must be used in such a way that it doesn’t store, utilize, share, or otherwise risk exposing private data.
Law is a particular area where the understanding of human behavior and nuances is essential and where reasoning and empathy, as well as a strict moral compass, are required. These very human characteristics are not something that AI has, at least up to now.
Dr. Andrew Fletcher, Director of AI Strategy and Partnerships at Thomson Reuters Labs, was asked whether generative AI is ready for primetime in legal. His response gives some idea of the status quo for AI in the legal profession.
“It depends. Are we looking to automate something, or augment something? Those are two really different things, especially when it comes to legal professional work,” said Fletcher. “Automation is done with caution because you’re focusing on the outcome being 100% correct. Augmentation is about putting tools in the hands of experts who make decisions based on what tools tell them. And this is absolutely ready for primetime.”
A CustomGPT chatbot is easily created through CustomGPT’s zero-code interface. The AI chat assistant’s persona can then be refined, and parameters can be set for its responses.
A CustomGPT bot uses OpenAI’s GPT-4 technology but adds a layer of protection. When the bot is populated only with an organization’s selected data, it generates responses based purely on that data set. This removes the potential for inaccuracies and hallucinations.
The Massachusetts Institute of Technology (MIT) chose CustomGPT to create generative AI for entrepreneurs because of CustomGPT’s ability to prevent inaccuracies. The MIT and CustomGPT case study closely examines AI accuracy through anti-hallucination measures, illustrating how the CustomGPT platform can remove some of the unknowns and risks of ChatGPT while providing bots that deliver a next-level customer experience.
A CustomGPT bot can answer basic client questions 24/7, providing better service and freeing up an organization’s valuable time for more in-depth human interaction. But it is capable of much more. For example, by customizing the persona of a CustomGPT bot, you can enable it to identify situations where a one-to-one meeting is required, and CustomGPT can schedule that meeting. If you’d like to try CustomGPT for free and without obligation, try our live demo.