CyberCube warns of major potential deep fake risks

The use of deep fake video and audio technologies could become a major cyber threat to businesses within the next two years, according to cyber analytics specialist CyberCube.

The ability to create realistic audio and video fakes using AI and machine learning has grown steadily. Growing dependence on video-based communication has expanded the supply of data from which photo-realistic simulations of individuals can be built, which can then be used to influence and manipulate people.

In addition, ‘mouth mapping’ -- a technology created by the University of Washington -- can be used to mimic the movement of the human mouth during speech with extreme accuracy. This complements existing deep fake video and audio technologies.

CyberCube’s head of cyber security strategy, Darren Thomson, said: “As the availability of personal information increases online, criminals are investing in technology to exploit this trend. New and emerging social engineering techniques like deep fake video and audio will fundamentally change the cyber threat landscape and are becoming both technically feasible and economically viable for criminal organisations of all sizes.

“Imagine a scenario in which a video of Elon Musk giving insider trading tips goes viral -- only it’s not the real Elon Musk. Or a politician announces a new policy in a video clip, but once again, it’s not real. We’ve already seen these deep fake videos used in political campaigns; it’s only a matter of time before criminals apply the same technique to businesses and wealthy private individuals. It could be as simple as a faked voicemail from a senior manager instructing staff to make a fraudulent payment or move funds to an account set up by a hacker.”

The report warns insurers that there is little they can do to combat the development of deep fake technologies but stresses that risk selection will become increasingly important for cyber underwriters.

Insurers should also consider the potential for deep fake technology to generate large losses, since it could be used in attempts to destabilise a political system or a financial market.

In March 2019, cyber criminals used AI-based software to impersonate a chief executive’s voice to demand the fraudulent transfer of US$243,000.

Thomson added: “There is no silver bullet that will translate into zero losses. However, underwriters should still try to understand how a given risk stacks up to information security frameworks. Training employees to be prepared for deep fake attacks will also be important.”
