Deepfake Evidence in Criminal Trials: The Emerging Dangers of Believing What You See and Hear

31 July 2023

Deepfake technology is a process by which a video or image is manipulated by artificial intelligence (‘AI’) to create a false representation. AI programs can study an individual’s attributes, including intricate facial features, and then generate realistic videos or images of that individual depicting events that never took place. The technology also extends to audio, where a person’s voice can essentially be harnessed and controlled through voice synthesis: algorithms analyse, deconstruct, and then reproduce a person’s voice with the correct tone, pitch, and cadence. For example, here is a photo of Jim Carrey substituted for Jack Nicholson in The Shining, and an audio clip of Morgan Freeman’s voice being synthetically generated to say words he did not say. Perhaps scarier still, AI has brought back Notorious B.I.G.’s voice from the dead to perform ‘N.Y. State of Mind’ by Nas.
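To give a sense of how accessible this has become, below is a minimal sketch of a voice-cloning pipeline using the open-source Coqui TTS library. The model identifier, file names, and exact call signature are assumptions drawn from that library’s public documentation and may differ between versions; the point is simply that a short clip of genuine speech is all that is required.

```python
# A minimal sketch of voice cloning with the open-source Coqui TTS library.
# The model identifier, file names, and call signature are assumptions taken
# from the library's public documentation and may vary between versions.
from TTS.api import TTS

# Load a multilingual voice-cloning model (weights are downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short clip of genuine speech is enough for the model to imitate the
# speaker's tone, pitch, and cadence while reading out entirely new text.
tts.tts_to_file(
    text="I was nowhere near the scene that evening.",  # words never actually spoken
    speaker_wav="genuine_recording.wav",                 # brief sample of the real voice
    language="en",
    file_path="synthesised_statement.wav",
)
```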

The upshot of all this is that criminal practitioners may be presented with a piece of evidence showing an individual doing and/or saying something that he or she did not do or say. With technological development constantly pushing towards ever greater realism, it is not difficult to see how this could bear not only on issues of identification but also on proof that an offence was committed at all in criminal proceedings.

Consider, by way of example, footage submitted by a member of the public who claims to live on the same road as the defendant. The footage is grainy and not of the highest quality to begin with. It shows the defendant, wearing the exact clothes in which he was later arrested (a camouflage top with a bright orange HOODRICH logo, Nike bottoms, and Crocs (no socks, of course)), leaving his house and walking down the street before stabbing the victim and running away. A male voice that, on the face of it, sounds very much like the defendant can be heard in the video demanding money from the victim. As the defendant runs away, he looks in the general direction of the camera, and a distinctive birthmark across his face can be seen.

Setting aside issues of admissibility for the moment, it is troubling indeed that every feature of this video could have been deepfaked: either generated from scratch by AI, or with the defendant’s identifying features (clothes, physical attributes, facial features, and voice) transposed onto the person who actually committed the robbery. One might consider the applicability of R v Turnbull [1977] QB 224, which emphasises the special need for caution before relying on identification evidence and the possibility of honest mistakes being made. Deepfakes, however, are conceptually distinct from the mischief addressed by the Turnbull regime. A deepfake in the above context would have been created deliberately, with the singular aim of framing the defendant at that scene committing the act. There is no mistake involved. Ancillary features such as his clothes and facial birthmark are carefully planted, and AI can manipulate his likeness to show him stabbing the victim. By the same token, deepfake evidence could just as readily be used to manufacture an alibi placing him somewhere else entirely.

There are, of course, methods by which technicians can attempt to scrutinise video and audio exhibits to ascertain their authenticity. Common current practice includes analysing the metadata of the files to gain insights into their origins and any subsequent tampering; this can reveal, for example, the times at which the files may have been modified. However, if the entire video was generated by AI from scratch, the metadata would show a single, internally consistent date of creation, and nothing would appear to have been tampered with. There are other ways to uncover deepfakes, ranging from contextual analysis (faked portions appearing out of place against their surroundings), to analysis of light, shadow, and pixel-level artefacts, to dedicated deepfake detection tools. All of these methods, however, rely on identifying minute inconsistencies and anomalies, and in some circumstances those imperfections can be smoothed out by AI so as to evade detection altogether.
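For illustration, the sketch below shows the sort of first-pass metadata inspection described above, using Python to call the ffprobe tool that ships with FFmpeg. The file name is hypothetical, and forensic examiners in practice use far more extensive toolchains; the point is that the recoverable metadata is limited and can easily be made to look clean.

```python
# A minimal sketch of the metadata check described above, assuming the
# ffprobe command-line tool (part of FFmpeg) is installed. File names are
# illustrative only.
import json
import subprocess

def read_container_metadata(path: str) -> dict:
    """Return the container-level metadata of a video file as a dictionary."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

meta = read_container_metadata("exhibit_cctv.mp4")
# Tags such as creation_time, encoder, and handler_name can hint at when and
# how the file was produced or re-encoded.
print(meta.get("format", {}).get("tags", {}))
# They prove little on their own, however: a file generated from scratch, or
# re-encoded after editing, can carry a single, perfectly consistent
# creation date.
```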

Consequently, one can see deepfake evidence being of critical importance at trial. The current regimes addressing deepfakes include the Online Safety Bill (not yet in force), the Fraud Act 2006, the Audiovisual Media Services Regulations, and the Data Protection Act 2018. However, these frameworks focus on combating misinformation, protecting privacy, and the use of deepfakes to commit fraud. None of them grapples with how deepfaked material might be admitted as evidence at trial and affect its outcome.

It is submitted that it is only a matter of time before deepfake evidence begins to appear in criminal proceedings. There is a growing need for courts and practitioners to start thinking about rules and procedures for determining the authenticity of evidence where deepfake involvement is suspected. Delaying that acknowledgement, awareness, and action will, in my view, make it more likely that individuals are wrongly convicted on the strength of deepfaked evidence.

Dr Justin Yang.