Researchers warn businesses, CEOs must ‘brace themselves’ for deepfake scams

(L-R): Professor Rebekah Russell-Bennett, Lucas Whittaker & Dr Kate Letheren (Provided)

Businesses and CEOs are increasingly at risk of reputational damage, extortion and intellectual property (IP) theft as synthetic media like deepfakes become so realistic people struggle to tell fact from fiction, according to researchers from Queensland University of Technology (QUT).

In a paper titled Brace yourself! Why managers should adopt a synthetic media incident response playbook in an age of falsity and synthetic media, the academics warn that individuals can correctly distinguish human faces from deepfake faces only about 50 per cent of the time.

Deepfakes use artificial intelligence to generate entirely new video or audio with the goal of portraying an event that never occurred.

PhD researcher Lucas Whittaker, from QUT’s Centre for Behavioural Economics, Society and Technology (BEST), said because synthetic media could be hard to distinguish from authentic media, the risk of an attack was high for all organisations, whether large or small, private or public, service- or product-oriented.

“It’s increasingly likely that individuals and organisations will become victims of synthetic media attacks, especially those in powerful positions or who have access to sensitive information, such as CEOs, managers and employees,” Whittaker said.

“Inauthentic yet realistic content such as voices, images and videos generated by artificial intelligence, which can be indistinguishable from reality, enables malicious actors to use synthetic media attacks to extract money, harm brand reputation and shake customer confidence.

“Artificial intelligence can now use deep neural networks to generate entirely new content because these networks act in some ways like a brain and, when trained on existing data, can learn to generate new data.”

Whittaker also said that bad actors could collect voice, image or video data of a person from various sources and feed it into neural networks to mimic, for example, a CEO’s voice over the phone to commit fraud.

“We are getting to the point where photorealistic images and videos can be easily generated from scratch just by typing a text description of what the person looks like or says,” he said.

“This alarming situation has led us to produce a synthetic media playbook for organisations to prepare for and deal with synthetic media attacks.”

“With such risks becoming more common, anyone whose reputation is of corporate value, including every CEO or board member who has been featured on earnings calls, YouTube videos, TED talks and podcasts, must brace themselves for the risks of synthetic media impersonation,” the study noted.

To manage the risks, researchers recommend that businesses take proactive measures like conducting organisational education around synthetic media, appointing and training incident response personnel, devising escalation policies and engaging in penetration testing.

Organisations should also consider their threat environment by understanding what might be attacked, like the reputation of a CEO or other brand representative, and why such an attack may occur.

Other options include paying for third-party services that can detect deepfakes.

“For instance, startups around the world are offering services to detect synthetic media in advance or to authenticate digital media by checking for synthetic manipulation, as it is so easy to trick people into believing and acting upon synthetic media attacks,” Whittaker said.

In the case an attack does occur, researchers recommend organisations engage in detection and analysis by conducting social listening, outsourcing media forensics, and gathering incident indicators to reveal the root causes of the event.

After detection, containment, eradication and recovery can begin, which involves the organisation working with legal counsel and social media platforms to try to remove the content, in addition to reporting the incident to authorities or regulators.

Co-researcher Professor Rebekah Russell-Bennett, who is co-director of BEST, said the increasing accessibility of tools used to generate synthetic media meant anyone whose reputation was of corporate value must be aware of the risks.

“Every CEO or brand representative who has a media presence online or has been featured in video or audio interviews must brace themselves for impersonation by synthetic media,” Professor Russell-Bennett said.

“Their likeness could be hijacked in a deepfake that might, for example, put the person in a compromising situation, or their voice could be synthesised to make a false statement with potentially catastrophic effects, such as consumer boycotts or losses to their company’s share value.

“Every organisation should have a synthetic media incident response playbook which includes preparation, detection, and post-incident procedures.”

