By Gregory van den Top,
AI Practice Leader, Marsh Advisory Europe
10/01/2025 · 5-minute read
Artificial intelligence (AI) has a rich history that dates to the mid-20th century when pioneers like Alan Turing and John McCarthy laid the groundwork for machine learning and intelligent systems. AI has since evolved from simple, rule-based systems to complex algorithms capable of learning from vast amounts of data. In Europe, the implications of AI in business are profound. Companies are leveraging AI to enhance operational efficiency, improve customer experience, and drive innovation.
However, AI in business also raises important considerations regarding data privacy, ethical use, and regulatory compliance. As organisations adopt AI technologies, they must navigate the complexities of the EU's legal framework, which emphasises transparency, accountability, and the protection of individual rights.
To successfully implement an artificial intelligence strategy and manage the associated risks, several steps can be taken, including:
1. Clarifying who is responsible for AI
Ownership of AI risk is often ambiguous within organisations (see figure 1). Responsibility for AI-related risks may be distributed across departments such as IT, compliance, and operations, without a clear designation of who is ultimately accountable. This lack of clarity can lead to fragmented communication, where different teams have conflicting priorities or fail to collaborate effectively.
Source: Airmic (poll of Airmic members taken on 27 August 2024).
To address these challenges, risk managers should define ownership and accountability for AI risk. This can be achieved by creating a cross-functional AI governance team that includes representatives from key departments, ensuring all relevant perspectives are considered in the decision-making process.
2. Evaluating the potential risks of AI
It is imperative to recognise the risks associated with AI implementation.
Organisations need to understand the impact of AI on their risk profile. For instance, reliance on large datasets for training AI models can heighten the risk of non-compliance with data protection laws, especially if sensitive information is mishandled or inadequately protected.
Additionally, algorithmic bias can lead to unfair or discriminatory outcomes, potentially damaging the organisation's reputation and resulting in legal repercussions. Regulatory non-compliance becomes a critical concern, as organisations must navigate evolving laws and guidelines surrounding AI use, particularly in regions with stringent data protection regulations, like the EU. These changes brought about by AI necessitate a comprehensive reassessment of existing risk management strategies to address the unique challenges posed by AI technologies.
To effectively mitigate and transfer the risks associated with AI implementation, risk managers should first conduct a thorough AI risk assessment that includes data-handling practices, algorithmic fairness, and regulatory compliance. They should evaluate how these risks affect their existing insurance portfolio, ensuring coverage and limits are sufficient. They should then consider transferring any remaining risks through insurance. By proactively addressing AI risks, risk managers can help their organisations harness the benefits of AI while minimising potential downsides.
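The assessment-then-transfer sequence described above can be sketched as a simple risk register: each AI risk is scored on likelihood and impact, controls reduce the inherent score to a residual score, and risks whose residual score still exceeds the organisation's appetite become candidates for transfer, for example via insurance. The risk names, the 1–5 scoring scale, and the appetite threshold below are illustrative assumptions, not figures prescribed by Marsh or any regulatory framework.

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in an illustrative AI risk register."""
    name: str
    likelihood: int            # 1 (rare) .. 5 (almost certain) -- assumed scale
    impact: int                # 1 (negligible) .. 5 (severe)   -- assumed scale
    mitigation_factor: float   # 0..1: fraction of risk remaining after controls

    @property
    def inherent_score(self) -> int:
        return self.likelihood * self.impact

    @property
    def residual_score(self) -> float:
        return self.inherent_score * self.mitigation_factor

def transfer_candidates(register: list[AIRisk], appetite: float = 8.0) -> list[str]:
    """Risks whose residual score exceeds the (assumed) appetite threshold
    remain candidates for transfer, e.g. through insurance."""
    return [r.name for r in register if r.residual_score > appetite]

register = [
    AIRisk("Data-protection non-compliance", likelihood=4, impact=5, mitigation_factor=0.5),
    AIRisk("Algorithmic bias", likelihood=3, impact=4, mitigation_factor=0.9),
    AIRisk("Model drift", likelihood=2, impact=3, mitigation_factor=0.8),
]

print(transfer_candidates(register))
# -> ['Data-protection non-compliance', 'Algorithmic bias']
```

The point of the sketch is the ordering the article recommends: controls (mitigation) are applied first, and only the residual exposure is weighed against existing coverage and limits.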
3. Implementing AI training programmes and a communication strategy
A comprehensive training and communication programme is essential for articulating the benefits and expected outcomes of AI initiatives, while also equipping staff with the skills needed to work alongside AI systems. Meanwhile, feedback mechanisms allow for ongoing dialogue and adjustment between risk management and the business.
A lack of training and communication can hamper AI initiatives by creating confusion, resistance, and a lack of engagement among employees. When staff are not adequately informed about the purpose of AI, they may perceive it as a threat to their jobs rather than an opportunity. And if employees are unsure how to use AI, they may revert to outdated processes, leading to inefficiencies and missed opportunities for innovation.
A comprehensive, cross-functional training and communication programme should provide employees with the knowledge and skills necessary to thrive in an AI-enhanced environment. This programme should include introductory sessions that explain the fundamentals of AI, its applications within the organisation, and the specific benefits it brings to their roles.
4. Monitoring the impact of AI
Establishing metrics to monitor the impact of artificial intelligence in business is essential to assess its effectiveness and make informed decisions, while adhering to EU legislation governing data protection and ethical AI use.
Evaluating the use and efficiency of AI in organisations is vital to ensure that these technologies deliver the intended benefits while complying with relevant regulations.
Without proper evaluation, organisations risk deploying AI systems that fail to meet business objectives or to provide a return on investment. For instance, an AI-driven customer service chatbot may be implemented to enhance user experience. But if it is not effectively trained or monitored, it could lead to increased customer frustration and dissatisfaction that ultimately harms the brand's reputation. Furthermore, if organisations do not assess the performance of AI algorithms, they may inadvertently perpetuate biases or make decisions based on outdated data, leading to ethical concerns and potential legal ramifications.
Before establishing a robust monitoring programme for AI initiatives, organisations should define key performance indicators (KPIs) that align with their strategic objectives.
Risk managers can then collaborate with data analysts and IT teams to develop a framework for data collection and analysis to monitor the programme. This framework should include regular reporting, such as monthly or quarterly reviews, to assess the performance of AI services against the KPIs. Risk managers should also advocate for the integration of feedback loops that allow for continuous improvement based on the insights gained from monitoring.
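A minimal sketch of the periodic review described above: each KPI carries a target and a direction, observed values from the reporting period are compared against those targets, and breaches are flagged for the monthly or quarterly review. The KPI names, targets, and observed values are hypothetical, chosen to echo the chatbot example earlier in the article; a real framework would be agreed with data analysts and IT teams.

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """An illustrative KPI with a target and a direction of 'good'."""
    name: str
    target: float
    higher_is_better: bool = True

def periodic_review(kpis: list[KPI], observed: dict[str, float]) -> list[str]:
    """Compare observed values against targets and return flagged breaches
    for the monthly/quarterly review and the feedback loop."""
    breaches = []
    for kpi in kpis:
        value = observed[kpi.name]
        on_target = value >= kpi.target if kpi.higher_is_better else value <= kpi.target
        if not on_target:
            breaches.append(f"{kpi.name}: {value} vs target {kpi.target}")
    return breaches

# Hypothetical KPIs for an AI customer-service chatbot.
kpis = [
    KPI("chatbot_resolution_rate", target=0.80),
    KPI("escalation_to_human_rate", target=0.15, higher_is_better=False),
    KPI("mean_response_seconds", target=5.0, higher_is_better=False),
]
observed = {
    "chatbot_resolution_rate": 0.74,
    "escalation_to_human_rate": 0.12,
    "mean_response_seconds": 6.2,
}

for breach in periodic_review(kpis, observed):
    print(breach)
```

In practice the flagged breaches would feed the continuous-improvement loop the article recommends: each breach triggers investigation (retraining, data refresh, or a control change) before the next reporting cycle.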
Risk managers need to embrace the opportunity to guide their organisation through the transformative AI journey and its potential benefits for businesses and society. By implementing an AI governance framework, recognising potential risks, providing employees with training, and monitoring performance metrics, they can effectively manage associated challenges and minimise risks related to data privacy in AI, algorithmic bias, and regulatory compliance.