One might think that at some point, lawyers would learn what they absolutely cannot do with artificial intelligence. But that day is not today.

The recent case of a New York federal judge sanctioning lawyers who submitted a ChatGPT-written legal brief citing non-existent court cases has raised important questions about the role of AI in the legal profession and the need for ethical guidelines.

The use of AI in the legal profession is not new, and many law firms have been experimenting with AI-powered tools to automate tasks such as document review, legal research and contract analysis. A law-specific AI tool has even raised a significant round of venture capital to automate contracts using generative artificial intelligence.

The recent ChatGPT incident highlighted the dangers of heavy reliance on AI without adequate supervision and training. Lawyers Steven Schwartz and Peter LoDuca used ChatGPT to draft a legal motion in a New York federal court case. The motion cited six court decisions that did not exist. When neither the judge nor opposing counsel could locate these cases, the judge directed Schwartz and LoDuca to produce the full texts of the decisions ChatGPT had cited.

The reason the cases were missing was straightforward and truly mind-blowing: ChatGPT made them up.

At the hearing, Judge P. Kevin Castel said he was considering sanctions against Schwartz and LoDuca over their use of ChatGPT. Schwartz tried to explain why he hadn't researched the cases ChatGPT provided, saying he didn't know the tool could fabricate cases. The judge was unconvinced, emphasizing that lawyers have a duty to verify the accuracy of the information they present in court.

The case involving ChatGPT highlights significant concerns regarding the ethical use of AI. The following points summarize the key lessons to be learned:

—AI is fallible: While AI-powered tools can be valuable, they are not infallible. Lawyers must exercise caution and not blindly rely on AI-generated information without verifying its accuracy.

—Training is crucial: Lawyers using AI tools should undergo proper training to understand those tools' limitations and potential risks. They should also be well-versed in the ethical considerations of AI usage and adhere to ethical guidelines.

—Oversight is essential: Law firms need effective oversight mechanisms to ensure responsible and ethical use of AI. This involves monitoring AI tool usage, providing training and support, and enforcing ethical guidelines.

—Transparency is key: Lawyers who use AI-powered tools must be transparent about that use and should disclose it to clients and the court. This disclosure should include information about AI's limitations and potential risks.

All of these points should be obvious, but they aren't. Absent ethical guidelines and oversight mechanisms to ensure that AI is used responsibly in the legal profession, where we are today is being left to rely on the good/best/even borderline-OK judgment of lawyers who aren't technology experts and don't understand the breadth and scope of what AI can and can't do.

In the long run, proper training, oversight and transparency are essential to ensure that AI is used in ways that uphold the integrity of the legal profession. For now, though, lawyers who don't fully appreciate how to use it responsibly shouldn't use it at all.