FERMA calls for urgent debate over AI ethics
Written by staff reporter
European risk management association FERMA has welcomed the appointment by the European Commission of experts for the High Level Group on Artificial Intelligence (AI HLG) and calls for urgent attention to two priorities for European business.
The group will support the implementation of the European strategy on AI, including the development of ethical guidelines by the end of this year. There are currently no clear ethical rules for the use of data generated by AI tools, and the AI guidelines will take into account principles on data protection and transparency.
President of FERMA Jo Willaert says, “FERMA stands ready to bring its unique expertise in enterprise risk management methodology and tools, such as risk identification and mapping, risk control and risk financing, to the discussion so we can manage the threats and opportunities posed by the rise of AI to our organisations and society within acceptable risk tolerances.
“FERMA argues that the new possibilities offered by AI must remain compatible with the public interest and with the interests of the economy and of commercial organisations. AI is already a reality in many organisations and it is going to disrupt our comprehension of the future.”
The risk management association is calling on the group to address the following two priorities for corporate organisations:
⦁ Draw a clear line between the opportunities offered by AI technologies and the threats those same technologies pose to the insurability of organisations as a result of over-reliance on AI in decision-making processes.
⦁ Define ethical rules for the corporate use of AI, not just for employees but also for suppliers and all other actors in the value chain. AI tools will allow increased and constant monitoring of a very large number of parameters. The risk management profession believes that this greater use of data could create concerns among stakeholders and risks to reputation.