AI security innovations need to keep pace with cyber attacks

Artificial intelligence could become a significant factor in cyber attacks in the future and investment in security to counteract the threat is needed, according to a new report from cyber security specialist WithSecure.

While the use of AI in cyber attacks is currently limited, a new report warns that this is poised to change in the near future. Co-created by WithSecure, the Finnish Transport and Communications Agency, and the Finnish National Emergency Supply Agency, the report analyses current trends and developments in AI, cyber attacks, and areas where the two overlap. It notes that at present, cyber attacks that use AI are very rare and limited to social engineering applications (such as impersonating an individual), or are used in ways that are not directly observable by researchers and analysts.

However, the report highlights that the quantity and quality of advances in AI have made more advanced cyber attacks likely in the foreseeable future. It suggests that target identification, social engineering, and impersonation are today’s most imminent AI-enabled threats and are expected to evolve further within the next two years in both number and sophistication.

It warns that within the next five years, attackers are likely to develop AI capable of autonomously finding vulnerabilities, planning and executing attack campaigns, using stealth to evade defences, and collecting or mining information from compromised systems or open-source intelligence.

Andy Patel, intelligence researcher at WithSecure, said: “Although AI-generated content has been used for social engineering purposes, AI techniques designed to direct campaigns, perform attack steps, or control malware logic have still not been observed in the wild. Those techniques will be first developed by well-resourced, highly-skilled adversaries, such as nation-state groups. After new AI techniques are developed by sophisticated adversaries, some will likely trickle down to less-skilled adversaries and become more prevalent in the threat landscape.”

While current defences can address some of the challenges posed by attackers’ use of AI, the report notes that others require defenders to adapt and evolve. New techniques are needed to counter threats such as AI-based phishing that utilises synthesised content, the spoofing of biometric authentication systems, and other capabilities on the horizon. The report also touches on the role that non-technical solutions – such as intelligence sharing, resourcing, and security awareness training – have in managing the threat of AI-driven attacks.

Samuel Marchal, senior data scientist at WithSecure, added: “Security isn’t seeing the same level of investment or advancements as many other AI applications, which could eventually lead to attackers gaining an upper hand. You have to remember that while legitimate organisations, developers, and researchers follow privacy regulations and local laws, attackers do not. If policy makers expect the development of safe, reliable, and ethical AI-based technologies, they will need to consider how to secure that vision in relation to AI-enabled threats.”
