AI: Clever clogs

With AI becoming increasingly mainstream, its potential risks are growing in complexity and number. Before we can fully harness the benefits, there are a number of potential pitfalls to consider. Martin Allen Smith reports

The global economic growth that artificial intelligence (AI) is expected to provide by 2030 is a staggering US$15.7 trillion, according to analysis by PwC. AI is automating tasks that require human cognition, such as fraud detection and maintenance scheduling for aircraft, cars and other physical assets, and it is augmenting human decisions on everything from capital project management to customer retention and market strategies for new products.

Globally, researchers and entrepreneurs are developing autonomous AI that will not need human intervention to make even highly complex decisions, bringing the potential to create new business models across financial services, healthcare, energy and mining, industrial products, media and entertainment.

As with many much-heralded technology developments before it, some remain sceptical that AI is anything more than a fad, set to be replaced by the next innovation. The current state of play, however, suggests that the scale of AI makes this less ‘tomorrow’s world’ thinking and more today’s priority. The driver is the significant benefits it could bring to business. According to research by Accenture, AI technologies are projected to boost corporate profitability in 16 industries across 12 economies by an average of 38 per cent by 2035.

As with any such reward, standing alongside are a host of equally large potential risks. Existing AI applications are built around so-called ‘weak’ AI agents, which exhibit cognitive abilities in specific areas, such as driving a car, solving a puzzle or recommending products or actions. With the first tangible benefits of weak AI applications already being felt, expectations for AI technology are rising, boosting the prospect for further large-scale investment in anticipation of the benefits of more human-like or ‘strong’ AI in future.

Risks and benefits will appear in the short or long term depending on how long it takes for ‘strong’ AI applications to be deployed in the real world. For businesses, the potential threats could easily counterbalance the huge benefits of such a revolutionary technology. According to the Allianz Risk Barometer 2018, the impact of AI and other forms of new technology already ranks as the seventh top business risk, ahead of political risk and climate change.

Companies face new liability scenarios and challenges as responsibility shifts from human to machine. Meanwhile, increasing interconnectivity means vulnerability of automated, autonomous or self-learning machines to failure or malicious cyber acts will only increase, as will the potential for larger-scale disruptions and losses, particularly if critical infrastructure is involved.

This is where things really venture into the unknown. One of the challenges of assessing business liability and risk when AI fails is that courts assess liability and damages based on prior legal precedent. This means that AI-based systems will inevitably be judged by applying legal concepts and assumptions built around human involvement and outdated case law. Legal claims of negligence involve traditional human concepts of fault, knowledge, causation, reasonableness and foreseeability. There is real uncertainty about how those concepts apply when human judgment has been replaced with an AI programme and an incident occurs.

One of the key benefits of AI lies in predictive analytics: the ability of certain AI software to analyse vast quantities of data and make predictions based on that data. However, the sheer scale of data sets that AI can process compared with humans means that arguably far more things are now ‘reasonably foreseeable’ to the growing number of companies that use AI to make strategic decisions. This means, potentially, that there is now a dramatic increase in the scope of what a company may be liable for.
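To make the predictive-analytics point concrete, here is a minimal, purely illustrative sketch (not any specific product referenced in this article) of how such software turns historical data into forward-looking predictions. It assumes Python with scikit-learn; the customer features, labels and risk threshold are invented for illustration.

```python
# Illustrative only: a toy predictive-analytics pipeline. All data, feature
# choices and the 0.5 threshold are assumptions made for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical data: each row is a customer (monthly spend, support tickets);
# each label records whether that customer later churned.
X_history = np.array([[120, 0], [45, 3], [80, 1], [30, 5], [200, 0], [25, 4]])
y_churned = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X_history, y_churned)

# Score current customers. The output depends entirely on the data supplied,
# so a competitor feeding different data would obtain different risk estimates.
X_current = np.array([[60, 2], [150, 0]])
churn_risk = model.predict_proba(X_current)[:, 1]

for customer, risk in zip(["A", "B"], churn_risk):
    if risk > 0.5:
        print(f"Customer {customer}: elevated churn risk ({risk:.2f})")
```

The point for liability is that once predictions like these are routinely generated, it becomes harder for a company to argue that a predicted outcome was not reasonably foreseeable.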

The second challenge is that the appropriate standard of ‘reasonable foreseeability’ will become even harder for humans to judge due to the nature of AI. In the past, ‘reasonable foreseeability’ was judged according to the objective standard of the ‘reasonable (human) person’. The increasing use of AI promises to change this standard to what a company in the same industry with similar experience, expertise and technology would reasonably foresee. This raises two issues. Firstly, predictive analytics relies heavily on the breadth and size of data sets too large for humans to process, which means it will be difficult for humans to judge what is ‘reasonably foreseeable’ for a given piece of AI software. Secondly, AI predictions depend entirely on the data the software receives. This means that, unless two companies obtain exactly the same AI software and feed it exactly the same data, even competitors with the same AI technology and markets may be acting on wildly different information.

Earlier this year, Accenture launched an AI testing service, intended to help organisations to train and sustain their AI systems in order to avoid some of the potential pitfalls. “Testing AI systems presents a completely new set of challenges,” said Kishore Durg, senior managing director, growth and strategy and global testing services lead for Accenture. “While traditional application testing is deterministic, with a finite number of scenarios that can be defined in advance, AI systems require a limitless approach to testing. There is also a need for new capabilities for evaluating data and learning models, choosing algorithms, and monitoring for bias and ethical and regulatory compliance.”
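As a rough illustration of what ‘monitoring for bias’ can mean in practice (this is not a description of Accenture’s testing service, whose internals the article does not cover), the sketch below applies a simple demographic-parity check: it compares the rate of positive predictions across groups and warns when the gap exceeds an assumed tolerance.

```python
# Illustrative bias check only: demographic parity difference across groups.
# The predictions, group labels and 0.2 tolerance are assumptions.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Return the share of positive (1) predictions for each group label."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

preds = [1, 0, 1, 1, 0, 1, 0, 0]                    # toy model outputs
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]   # protected attribute

rates = positive_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.2:
    print(f"Warning: demographic parity gap of {gap:.2f} exceeds tolerance")
```

Checks like this are only one slice of the ‘limitless approach to testing’ Durg describes, but they show why evaluating the data and monitoring model outputs matters as much as testing the code itself.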

AI-powered software will undoubtedly alter the digital security threat landscape. It could help to reduce cyber risk by better detecting attacks and yet paradoxically, also increase it if malicious hackers are able to take control. AI could enable more serious incidents to occur by lowering the cost of devising cyber attacks and enabling more targeted incidents. For example, the same programming error or hacker attack could be replicated on numerous machines. Or one machine could repeat the same erroneous activity several times, leading to a large-scale accumulation of losses.

The big problem for insurers lies in trying to foresee the hidden risks in such a potentially far-reaching technological development. Traditional coverages – such as liability, casualty, health and life insurance – will need to be adapted to protect consumers and businesses alike. Insurance will need to better address certain exposures to businesses such as a cyber attack, business interruption, product recall and reputational damage resulting from a negative incident.

AI raises concerns around personal data, particularly the extent to which it can be used to increase the intelligence of AI agents. Data protection regulation in Europe already places conspicuous limitations on the adoption of AI systems. Businesses will need to reduce, hedge or financially cover themselves against the risks of non-compliance with new data protection regulations in future.

At the same time, AI will bring benefits to insurers as well as new risks. AI applications will improve the insurance transaction process, with many benefits already apparent. Customer needs can be better identified, policies can be issued – and claims processed – faster and more cheaply. Corporate risks, such as business interruptions, cyber security threats or macroeconomic crises, can be better predicted. Insights gained from data and AI-powered analytics could expand the boundaries of insurability, extending existing products, as well as giving rise to new risk transfer solutions in areas such as non-damage business interruption and reputational damage.

Some of the world’s biggest organisations are being open about the prospect that, lurking among all the potential benefits, AI presents some very real dangers. In its 2018 annual report, Microsoft stated: “As with many disruptive innovations, AI presents risks and challenges that could affect its adoption, and therefore our business. AI algorithms may be flawed. Datasets may be insufficient or contain biased information. Inappropriate or controversial data practices by Microsoft or others could impair the acceptance of AI solutions. These deficiencies could undermine the decisions, predictions, or analysis AI applications produce, subjecting us to competitive harm, legal liability, and brand or reputational harm.”

Such is the scale of change promised by AI developments, it seems set to remain a rapidly evolving landscape. Accepting that with the rewards comes significant risk is perhaps the first step towards mitigating some of the dangers.


This article appears in the September 2018 issue of CIR Magazine.
