The authentication of trial exhibits has become a pressing concern in the era of artificial intelligence (AI). AI has introduced a new and dynamic challenge to the practice of law, particularly in civil litigation. As the technology has rapidly evolved, questions about the reliability, admissibility, and ethics of potentially AI-generated evidence have become increasingly relevant for the litigator.

With AI now capable of altering case evidence, courts must grapple with whether AI-manipulated evidence meets the admissibility standards set forth in the traditional rules of evidence. Establishing the reliability, authenticity, and ultimate admissibility of evidence has thus become an increasingly necessary focus for the trial attorney. No example highlights this issue better than the "deep fake."

An evidentiary "deep fake" is best defined as audio or video either created or altered by artificial intelligence algorithms. "Deep fake AI, short for 'deep learning' and 'fake,' is a technology that uses deep learning techniques and AI to generate highly convincing fake or manipulated digital information…" (https://www.infosectrain.com). Deep fakes use advanced machine learning models to generate hyper-convincing simulations of objects, locations, or individuals saying or doing things that never occurred. Their realism poses a significant challenge in the legal setting, where the authenticity of evidence is paramount; the admission of unchallenged or unauthenticated evidence can profoundly affect a jury verdict or award. To meet this rising challenge, trial lawyers and courts will need to implement measures to identify and authenticate digital evidence, especially evidence involving imagery, audio, or video. Authentication will likely require the expertise of a forensic analyst and the implementation of new protocols to ensure the veracity of the evidence being offered for admission. But what should a trial attorney do at the outset, upon receiving evidence she suspects may be a "deep fake" or a product of AI?

"When it comes to AI-manipulated media however, there's no single tell-tale sign of how to spot a fake." (https://www.media.mit.edu). Nonetheless, there are several deep fake artifacts to be on the lookout for. At least at this stage, AI-generated deep fakes may exhibit detectable abnormalities, including unnatural human behaviors such as irregular blinking patterns, speech patterns, and body synchronization. Listening closely may reveal irregularities in the audio, such as sudden shifts in tone, background noise, or varying pitch, or may reveal speech out of sync with lip movement. In addition, AI algorithms can in some instances introduce artifacts or "glitches," unnatural eye reflections, unexpected pixelation, or inconsistent backgrounds when generating a deep fake photo or video. When initially reviewing evidence, a trial attorney should pay attention to, and challenge the authenticity of, any evidence exhibiting distortions or anomalies that suggest it has been manipulated by AI.

For the litigator, the best safeguard against suspected AI-manipulated evidence being admitted into the court record is the retention of a forensic analyst to examine the evidence's digital file for AI-generated manipulations, discrepancies, and anomalies not detectable by the untrained eye. Identifying a computer-altered or AI-generated piece of evidence may be the difference between winning and losing your case at trial, and the expense of retaining such an expert is certainly justifiable when AI-generated or altered evidence is suspected. It seems inevitable that deep fake evidence will find its way into our trial courts, if it has not already done so.
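One basic forensic check that illustrates what a digital examination involves is hash verification: confirming that a received file is bit-for-bit identical to the version originally produced in discovery. A hash mismatch does not identify AI manipulation by itself, but it does prove the file was altered somewhere along the chain of custody. The sketch below is a minimal, hypothetical illustration using Python's standard library; the file names and workflow are assumptions for demonstration, not a substitute for a qualified forensic analyst.

```python
# Minimal sketch: verifying a digital exhibit against the hash recorded
# when it was originally produced. A mismatch shows the file has been
# changed in transit, which warrants deeper forensic examination.
# All file names and hashes here are hypothetical.

import hashlib


def sha256_of(path: str, chunk_size: int = 65536) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so that
    large video exhibits do not need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def matches_produced_hash(path: str, expected_hex: str) -> bool:
    """True only if the file's current hash equals the hash recorded
    at the time of production (case-insensitive hex comparison)."""
    return sha256_of(path) == expected_hex.lower()
```

In practice, counsel would request the producing party's recorded hash values along with the files, then run a comparison like `matches_produced_hash("exhibit_A.mp4", produced_hash)` on receipt; any mismatch is a red flag to raise before the evidence moves toward admission.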

While efforts are underway to safeguard against and regulate the generation and use of deep fake content, it is incumbent upon today's trial attorney to stay vigilant as to the evidence being offered in discovery and at trial, both to best serve the client's interests and to help maintain the integrity of the court system and case precedent.

Jonathan Ciottone

Jonathan Ciottone is the managing partner of the Hartford Office of MKCI, member of the firm’s Management Committee and co-Chair of MKCI’s AI Committee. Attorney Ciottone has over 25 years’ experience as lead defense and first-chair trial counsel on a broad range of State and Federal civil matters including product liability, professional liability, school liability, civil sexual assault, maritime civil litigation, fire loss, traumatic brain and catastrophic injury, and other insurance defense matters including multi-district litigation and mass tort. Attorney Ciottone has regularly worked to use cutting edge technologies such as generative AI to improve his effectiveness as defense counsel and more efficiently serve his clients.