ETSI publishes ‘deepfake’ study

ETSI has produced a new report on the use of AI in the production of ‘deepfakes’. The report – ETSI GR SAI 011 – has been released by the organisation’s Securing AI group (ISG SAI).

According to ETSI, the report “focuses on the use of AI for manipulating multimedia identity representations and illustrates the consequential risks, as well as the measures that can be taken to mitigate them.”

A statement released by the organisation continues: “ETSI GR SAI 011 outlines many of the more immediate concerns raised by the rise of AI. [This includes] the use of AI-based techniques for automatically manipulating identity data represented in various media formats, such as audio, video, and text.

“The report describes the different technical approaches, and also analyses the threats posed by deepfakes in various attack scenarios. The ISG SAI group works to rationalize the role of AI within the threat landscape.”

Scott Cadzow, chair of the Securing AI group, said: “AI techniques allow for automated manipulations which previously required a substantial amount of manual work, and – in extreme cases – can even create fake multimedia data from scratch.

“Deepfake [techniques] can also manipulate audio and video files in a targeted manner, while preserving high acoustic and visual quality in the results. Our ETSI report proposes measures to mitigate [these threats].”

Author: Philip Mason
