Do trial lawyers currently have an ethical duty to consider AI as part of their litigation plan? The answer is yes. 

Artificial Intelligence (AI) in trial practice is not something to be considered in the future. AI is here now, and many judges and lawyers are already using AI in litigation. Lawyers who fail to consider AI in connection with representing clients in litigation may breach ethical duties.

It All Starts with Ethics

Duty to Learn About New Technology

Trial lawyers have an ethical duty to be aware of existing technologies such as AI and to use them when appropriate to obtain efficiencies for clients and to improve litigation results. ABA Model Rule 1.1 (Competence) provides that “A lawyer shall provide competent representation to a client. Competent representation requires the legal knowledge, skill, thoroughness and preparation reasonably necessary for the representation.” The Comment to Rule 1.1 was updated in 2012 to include Comment 8, which explicitly recognizes the role of technology in legal competence. It states: “To maintain the requisite knowledge and skill, a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology” (ABA, Model Rules of Professional Conduct, Rule 1.1, Comment 8). For a deep dive into Comment 8 and examples of disciplinary actions, check out Z. Rosenof, The Fate of Comment 8: Analyzing a Lawyer’s Ethical Obligation of Technological Competence, 90 U. Cin. L. Rev. (2022). The requirement for technological competence is dynamic: as technology evolves, so too does the standard for what constitutes competent representation. Lawyers must engage in continual learning to stay abreast of technological advancements that can affect their practice and their clients’ interests.

Duty to Be Efficient and Save Costs

Litigators and their clients know that litigation can be very expensive. ABA Model Rule 1.5 requires fees to be reasonable. If there is a more efficient, lower-cost way to perform a task, the lawyer has an ethical duty to consider it. Many litigated cases involve thousands of pages of documents such as medical records, emails, depositions, and witness statements.

Consider, by way of example, the following scenario based on personal experience. Before AI, preparing for an arbitration with 5,000 pages of medical records and expert reports to review and analyze, in a case where medical causation was the central issue, would have taken several days to digest and summarize the relevant records. In addition, an arbitration notebook tabbed with key pages would be necessary and would take additional time to produce in advance of the arbitration. In my experience, performing these same tasks with a closed (non-public-facing) AI-powered litigation tool, trained to read, analyze, and summarize medical records, takes a fraction of the time. In addition, the AI tool provides pin cites, allowing my team to check the summary against the supporting records to verify accuracy: an efficient and precise outcome at an exponentially lower cost for the client.

AI tools can handle thousands of pages of medical records in a short period of time, extract clinical data, create chronologies, and assist with analysis. Some examples of currently available AI medical records tools are Wisedocs and DigitalOwl.

Use of AI by Lawyers in Litigation 

Chief Justice Roberts notes in his 2023 Year-End Report on the Federal Judiciary that AI has introduced an opportunity for transformational change in the law, and in litigation in particular. He compared the AI era to historical developments such as the introduction of electricity and the first use of computers in the court system. He observed that “Rule 1 of the Federal Rules of Civil Procedure directs the parties and the courts to seek the ‘just, speedy, and inexpensive’ resolution of cases. Many AI applications indisputably assist the judicial system in advancing those goals. As AI evolves, courts will need to consider its proper uses in litigation.” The Chief Justice concludes by noting: “I predict that judicial work—particularly at the trial level—will be significantly affected by AI. Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them.”

AI in Legal Research 

By now most of us have heard of the potential problems with AI-assisted legal research, including the New York lawyer who submitted fake case citations in a brief to the court using unverified ChatGPT sources. If that case is not familiar to you, the details can be found here. Lawyers should avoid public-facing generative AI programs such as ChatGPT, which can be prone to “hallucinations,” at least without verifying all citations and associated legal arguments. Major legal publishers including Westlaw and Lexis currently offer lawyers options for AI-assisted legal research using closed databases that include only real cases and are not subject to hallucinations. There are a growing number of other options. Lawyers have reported substantial savings (24%) in the time it takes to produce the legal answers and arguments necessary to brief the issues in a motion. Even more impressive, a recent study showed that lawyers using AI-powered legal research were more likely to find authorities supporting their position that would have been missed using traditional computer legal research. See The Real Impact of Using Artificial Intelligence in Legal Research.

As AI legal research tools are now readily available, judges and law clerks are expected to use AI to evaluate litigators’ briefs and legal arguments. Keep an Eye on Judges’ Chambers for Insight Into AI Adoption in Law, New Jersey Law Journal, November 23, 2023. In West Virginia, the Judicial Investigation Commission was asked to provide an ethical opinion on whether judges could use AI in their work deciding litigated cases. The advisory opinion states: “Judges have a duty to remain competent in technology, including AI. The duty is ongoing. A judge may use AI for research purposes.” The opinion also stated that judges should never use AI to determine the outcome of a case and that use of AI in drafting opinions or orders should be done with extreme caution. The Commission compared proper use of AI by judges to the traditional use of law clerks: helpful in reaching the outcome, but not the decision maker. Deciding the outcome of cases is reserved to the judge.

Final Thoughts – AI In Evidence?

With the increase in available AI tools there has also been an increase in “deep fakes” (false, computer-generated images) in photographs (ask Taylor Swift), videos, and voice recordings. With all we have recently come to learn about the prevalence of AI-generated deep fakes, trial lawyers need to consider checking their opponents’ trial exhibits (and even their own) for AI alteration of image and sound files. Our next AI Law Answers blog post will take a detailed look at deep fakes in litigation.
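To make the exhibit-checking idea concrete: some AI tools embed provenance metadata in the files they produce (for example, C2PA “content credential” manifests). The short Python sketch below is a hypothetical first-pass screen only, not an established forensic method; the marker list is illustrative, absence of a marker proves nothing, and any hit should go to a qualified forensic expert.

```python
def scan_for_ai_markers(path):
    """Crude first-pass screen of an exhibit file.

    Searches the raw bytes for metadata strings that some AI tools
    embed (e.g., C2PA content-credential manifests). This is an
    illustrative heuristic, not forensic proof: a clean result does
    not mean the file is authentic, and a hit only flags the file
    for review by a forensic expert.
    """
    # Hypothetical marker list, for illustration only.
    markers = [b"c2pa", b"C2PA"]
    with open(path, "rb") as f:
        data = f.read()
    # Return the unique markers found, in sorted order.
    return sorted({m.decode() for m in markers if m in data})
```

In practice, a litigation team might run a screen like this across a folder of opposing exhibits and set aside any flagged files for expert examination, while remembering that sophisticated fakes will carry no such markers at all.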

Resources to Learn More About Ethical Use of AI in Litigation 

If you are a trial lawyer or litigation manager and you want a good primer about how AI works and how to ethically use AI in litigation, here are some resources:

  1. PRACTICAL GUIDANCE FOR THE USE OF GENERATIVE ARTIFICIAL INTELLIGENCE IN THE PRACTICE OF LAW, State Bar of California;
  2. PROPOSED ADVISORY OPINION 24-1 REGARDING LAWYERS’ USE OF GENERATIVE ARTIFICIAL INTELLIGENCE, Florida Bar;
  3. Understanding Impacts of AI on the Legal Profession – Computer Science for Lawyers at Harvard;
  4. Generative AI for Lawyers: What it Is, How It Works, and Using It for Maximum Impact;
  5. Generative AI for the Legal Profession, Berkeley Law;
  6. AI Litigation Database, Ethical Tech Initiative, The George Washington University Law School.
Bruce H. Raymond

Bruce Raymond is a Partner in MKCI’s Hartford office and co-chairs the firm’s AI Committee, which oversees deployment of AI technologies at the firm. He regularly uses generative AI as part of his litigation practice and advises clients, including law firms, on AI issues, policy, and ethics. Bruce has been lead trial counsel in over 1,000 litigated cases in Connecticut and Massachusetts. His practice focuses on insurance defense, toxic tort including asbestos, product liability, business litigation, construction defect, professional liability, cyber liability, data security and privacy, IT errors and omissions, fire loss, premises liability, and traumatic brain injury. Bruce is a former DRI Board Member, CLM Regional Director, and President of The Connecticut Defense Lawyers Association.