VIEW: AI waits for no man, neither should risk professionals

The Government’s recent AI summit at Bletchley Park put artificial intelligence at the top of the agenda, with the resulting declaration listing regulation as one of the ‘critically important’ factors needed for AI to ‘transform and enhance human well-being, peace and prosperity’.

The truth is, AI is already regulated in many ways, but not always directly. Take insurance, for example. AI is already being used in a wide range of activities, from calculating premiums to processing claims and identifying fraud. None of these actions is controlled by an AI-specific regulator, but they don’t need to be.

Underwriting decisions, for example, are already regulated by the Equality Act 2010, which forbids discrimination on the basis of ethnicity and gender, and only allows exemptions based on age and disability where underwriters can show that their risk assessments rest on sound information.

The Financial Conduct Authority has emphasised the role the Consumer Duty will play in underpinning the Equality Act, saying it requires firms to “monitor whether any group of retail customers is experiencing different outcomes than other customers and take appropriate action where they do”.

The use of data to train machines to make decisions about fraud or claims is also regulated. The General Data Protection Regulation and the Data Protection Act 2018 already require firms to use data responsibly, and the Consumer Duty requires firms to “regularly monitor the customer support they provide to make sure there are no systemic issues that create unreasonable barriers or costs for customers”.

So, while it is important for politicians to ensure a consistent international approach to AI policy, for firms that want to be known as responsible innovators, the standards of good practice are already clear: establish clear, auditable processes; stay alert to the potential for bias; and operate in a culture of continuous improvement in which ethical humans remain in control.

The innovators at Bletchley Park did not wait for governments to tell them precisely how to do their job. Neither should professionals working with AI.


