Warnings AI-generated 'deepfakes' could be used to spread fake news

A new study has revealed two-thirds of people are worried by the lack of regulation around AI-generated political content.

Author: Stan Tomkinson
Published 1st May 2024
Last updated 1st May 2024

As we move further into the digital future, there are fears AI-generated misinformation could damage our democratic systems.

This comes as a new study has found that nearly four in ten people in the North West have stumbled upon deepfakes online, with many none the wiser until it's too late.

A deepfake is an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.

Jake Moore, a cyber-security expert and former police Head of Digital Forensics, is urging people to be wary: "Until regulations are there and until there are laws that help look after the public, I think people need to be more educated about what these videos and voice cloning tools can actually achieve.

"They can be used for any sorts of misinformation but we're also seeing them used in financially motivated crimes so to manipulate people into doing something

"You may see there is a voice note left in your messages and it might sound just like your contact or it may sound like the boss and this is the type of influence that is usually needed to make people do something they wouldn't normally do such as send some money or divulge a password."

A third of us admit we're not sure if we could tell real from AI-generated fakes when it comes to videos, images, and audio.

The study found that:

• 67% of people in the North West and Wales are deeply concerned about the inadequacy of regulations around AI-generated political content

• 4 in 10 have come across deepfakes online – a further 30% aren’t sure (but think they have)

• People aren’t overly confident they could spot video (34%), audio (38%) or image (34%) content that had been manipulated by AI

• Nearly half (48%) of those surveyed wouldn’t trust political content they saw on social media in the run up to an election

In January alone, more than 100 deepfake video advertisements impersonating Rishi Sunak were paid to be promoted on Facebook, according to research that has raised alarm about the risk AI poses before the general election.

In the UK, recent legislative actions have addressed the misuse of deepfake technology, particularly regarding the creation and distribution of non-consensual deepfake imagery.

This has been included in amendments to the Criminal Justice Bill, which criminalise the sharing of 'deepfake' intimate images as part of a broader effort to combat online harms.

The survey, from cyber-security experts ESET, echoes a chilling concern: 67% of us are deeply troubled by the lack of regulation around AI-generated political content.
