The European Union’s proposed artificial intelligence act

The regulation, which is currently being debated and amended, is based on various levels of risk involved in the use of artificial intelligence (AI). The European Commission and member states have decided to promote excellence in AI by joining forces on policy and investment.

The rules follow a risk-based approach and set obligations for providers and users depending on the level of risk that AI can generate. The classification of high-risk areas has been expanded to include harm to people's health, safety, fundamental rights, and the environment. AI systems for influencing voters in political campaigns and recommendation systems used by social media platforms have also been added to the high-risk list. This proposed legislation is still under development and is evolving through consultation.
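To make the risk-based approach concrete, the tiers described above can be sketched as a simple classification. This is an illustrative model only: the tier names reflect the proposal's general structure, and the example use-case mapping is our own reading of the categories named in the text, not an official taxonomy or legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers under the proposed EU AI Act (illustrative only)."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # strict obligations before deployment
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # largely unregulated

# Hypothetical mapping of example use cases to tiers, based on the
# categories mentioned in the article; real classification of any
# system requires a proper legal assessment.
EXAMPLE_TIERS = {
    "social media recommendation system": RiskTier.HIGH,
    "voter influence tool in political campaigns": RiskTier.HIGH,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up an example tier; default to MINIMAL when unlisted."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL)
```

For instance, `classify("social media recommendation system")` returns `RiskTier.HIGH`, mirroring the expansion of the high-risk list described above.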

Generative foundation models should comply with additional transparency requirements, including, but not limited to, disclosing that content was generated by an AI algorithm, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.
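The three transparency requirements above could be tracked internally as a simple compliance checklist. A minimal sketch follows; the field names are our own shorthand for the obligations listed in the text, not terminology from the Act itself.

```python
from dataclasses import dataclass, fields

@dataclass
class GenerativeModelTransparency:
    """Checklist of the transparency requirements described above
    (field names are illustrative, not legal terminology)."""
    discloses_ai_generated_content: bool = False
    designed_to_prevent_illegal_content: bool = False
    publishes_copyrighted_training_data_summary: bool = False

    def outstanding(self) -> list:
        """Return the names of requirements not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Usage: a model that so far only discloses AI-generated content
model = GenerativeModelTransparency(discloses_ai_generated_content=True)
print(model.outstanding())  # the two remaining obligations
```

A structure like this makes gaps visible at a glance, though whether any given obligation is actually met remains a legal judgment.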

The proposed new law promotes regulatory sandboxes (controlled environments) set up by public authorities to test AI before its deployment, as well as exemptions for research activities and AI components provided under open-source licences.

Proposals include strengthening the right of citizens to file complaints about AI systems and to receive explanations for decisions based on high-risk AI systems that have a significant impact on their rights.

How should organisations respond to the EU act on artificial intelligence?

Organisations will need to assess the impact of the proposed AI Act on their business and business model, identifying where changes are needed and where current processes require closer attention.

Trusted advisers can help organisations understand how this act can be applied within their existing framework by:

  1. Helping organisations identify and assess potential risks associated with their AI applications, differentiating between unacceptable, high, and low or minimal risks. Strategies can then be implemented to mitigate those risks accordingly.
  2. Defining how to comply with the specific rules and requirements for AI systems such as transparency obligations for systems that (i) interact with humans, (ii) are used to detect emotions or determine association with social categories based on biometric data, or (iii) generate or manipulate content (“deep fakes”).
  3. Identifying their AI value chain (including importers, distributors, and authorised representatives) and the obligations that fall on each actor in it.

A comprehensive plan that covers all the new and upcoming legislation covered by the EU’s digital strategy can create opportunities as well as mitigate risk. Read more about the impact of the EU Data Governance Act.

Sign up for the series and stay informed

This is just the second episode in our insightful and informative series “Prepared for the unexpected: the dynamic risks series”. If you would like to be notified when the next instalment is available, click here.

The article is for information purposes only. Marsh makes no representation or warranty as to its accuracy. Marsh shall have no obligation to update the article and shall have no liability to any party arising out of this document or any matter contained herein. Any statements concerning actuarial, tax, accounting, labour, or legal matters are based solely on our experience as insurance brokers and risk consultants and are not to be relied upon as actuarial, tax, accounting, labour, or legal advice, for which clients should consult their own professional advisers. Any analysis and information are subject to inherent uncertainty, and the article could be materially affected if any underlying assumptions, conditions, information, or factors are inaccurate or incomplete or should change. Although Marsh may provide advice and recommendations, all decisions regarding the measures to be adopted are the ultimate responsibility of the client.