
The deep fake threat

From The Detail, 5:00 am on 27 July 2020

A collage of computer-generated humans Photo: thispersondoesnotexist.com

If you watch television, you may have come across an advertisement for Spark which shows Lord Ernest Rutherford sitting in an armchair, pontificating over the remarkable rate of human technological progress.

It promotes the telco's rollout of 5G, the new technology standard for cellular networks, and was created, with the endorsement of Rutherford's estate, by manipulating archival footage and audio of Lord Rutherford recorded while he was alive.

This video is an example of a deepfake. A relatively innocuous one, all things considered - but what happens when this sort of technological power can be wielded by those with more sinister intentions?

In today's episode of The Detail, Emile Donovan speaks to researchers Tom Barraclough and Curtis Barnes, and media commentator Gavin Ellis, about a new reality in which you can't necessarily believe your eyes.

Deepfakes are difficult to define.

They're a branch of synthetic media - meaning media which is created by computers, but using real people, footage, or sounds, as a template.

Imagine you photographed every single adult in New Zealand from three different angles, and fed all those images into a computer which then analysed everything about them: the shape of their faces, the placement of their lips, every follicle on their heads, cheeks and upper lips.

Once it has processed those 10 million or so images, the computer has a pretty good idea of what a human looks like: and so, using those templates, it could - if commanded to - create its own photo of a human who looks completely realistic but doesn't exist.

This is how deepfakes are made, and they're not limited to photos: following the same logic, you can create videos or audio as well.

When a lot of video or audio footage of a person exists, as is the case with politicians or famous actors, computers have so much material to work with that they can create extremely convincing fakes of real people.
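For the technically curious, the technique behind sites like thispersondoesnotexist.com is a generative adversarial network (GAN): two models play a game in which one generates images and the other tries to spot the fakes. Below is a minimal, hypothetical sketch of that idea in Python using PyTorch - the network sizes, names and toy training step are illustrative only, and real face generators such as StyleGAN are vastly larger and trained on millions of photos.

```python
# Minimal GAN sketch: a generator learns to turn random noise into images,
# while a discriminator learns to tell real images from generated ones.
import torch
import torch.nn as nn

LATENT_DIM = 64         # random "seed" vector the generator turns into an image
IMAGE_PIXELS = 28 * 28  # tiny greyscale images, purely for illustration

# The generator maps random noise to a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256),
    nn.ReLU(),
    nn.Linear(256, IMAGE_PIXELS),
    nn.Tanh(),  # pixel values in [-1, 1]
)

# The discriminator scores how "real" an image looks.
discriminator = nn.Sequential(
    nn.Linear(IMAGE_PIXELS, 256),
    nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_images: torch.Tensor) -> None:
    """One round of the two-player game."""
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT_DIM))

    # 1. Teach the discriminator: real photos are labelled 1, fakes 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    d_opt.step()

    # 2. Teach the generator to fool the discriminator into scoring fakes as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    g_opt.step()

# Stand-in for a real dataset: random tensors shaped like tiny photos.
train_step(torch.randn(16, IMAGE_PIXELS))

# After enough training on real photos, generating a brand-new face is just:
new_face_that_doesnt_exist = generator(torch.randn(1, LATENT_DIM))
```

After many rounds of this game, the generator produces images the discriminator can no longer distinguish from the real thing - which is exactly the property that makes deepfakes so convincing.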

This is how Carrie Fisher appeared in the latest Star Wars movie, and how a video popped up in 2019 of Boris Johnson urging people to vote for his opponent, Jeremy Corbyn, in that year's British general election.


Spark has reanimated Lord Rutherford for its latest 5G ad. Photo: Spark screenshot

Former NZ Herald editor Gavin Ellis says this technology has the potential to erode public trust in media, and undermine democracy.

"What I am worried about is the proliferation of deliberate disinformation via these vehicles, that have such a sense of authenticity about them that it's easy to be deceived."

Once upon a time, deepfakes could only be produced with high-end technology - think digital effects companies, like Weta Digital.

But the technology is developing at such a pace that now anyone with above-average tech skills and a high-end machine can create convincing deepfakes - and those people are using that power.

In May of last year, US House Speaker Nancy Pelosi was the victim of manipulated footage: a speech she had given was slowed down and its pitch adjusted, giving the impression Pelosi was slurring her words, or drunk.
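That particular manipulation is crude enough to sketch in a few lines of code. The example below uses the librosa audio library purely for illustration; the filenames and parameter values are hypothetical, not a reconstruction of what was actually done.

```python
# Sketch of a crude audio manipulation: slow a recording down, then
# shift the pitch so the slowdown is less obvious - the combination
# can make normal speech sound slurred.
import librosa
import soundfile as sf

audio, sample_rate = librosa.load("speech.wav", sr=None)

slowed = librosa.effects.time_stretch(audio, rate=0.75)            # 75% speed
adjusted = librosa.effects.pitch_shift(slowed, sr=sample_rate, n_steps=2)

sf.write("speech_manipulated.wav", adjusted, sample_rate)
```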

This technology does have positive uses: it can help synthesise voices for people who have lost the ability to speak, generate fashion designs or melodies, or create virtual models for companies that mightn't have the means to hire real people for advertising campaigns.

But it has a dark side too, particularly in the realm of pornography: with enough know-how, it's entirely feasible to create a pornographic image that combines an actual photo with someone else's face - which has disturbing implications.

Tom Barraclough from research company Brainbox Ltd says while the technology's benefits probably outweigh its drawbacks, society needs to have a discussion about how best to mitigate its potential for villainy.

"There is a lot of room to be quite alarmed about using synthetic media technology to produce deceptive material that suggests a politician has said or done something that could be really harmful. And I think we can't downplay that.

"[But] the New Zealand law is pretty up to scratch."