
The Artificial Intelligence Act

Understand the EU Artificial Intelligence Act's risk-based approach to AI regulation. Learn how organizations can prepare for compliance and mitigate risks.

The regulation is based on various levels of risk involved in the use of artificial intelligence (AI).

The standards follow a risk-based approach and set obligations for providers and users depending on the level of risk that AI can generate. The classification of high-risk areas includes harm to people's health, safety, fundamental rights, and the environment. Artificial intelligence systems for influencing voters in political campaigns and recommendation systems used by social media platforms are also on the high-risk list.

General purpose AI models need to comply with transparency requirements, including, but not limited to, disclosing that content was generated by an AI system, designing the model to prevent it from generating illegal content, and publishing summaries of copyrighted data used for training.

The law promotes regulatory sandboxes (controlled environments) set up by public authorities to test AI before its deployment, as well as exemptions for research activities and AI components provided under open-source licences.

EU citizens also have the right to file complaints with the relevant authorities about AI systems and to receive explanations for decisions based on high-risk AI systems that have a significant impact on their rights.

The implementation timeline of the EU AI Act is as follows:

  • August 1, 2024: The Act enters into force.
  • February 2, 2025: Prohibitions on unacceptable-risk AI practices take effect.
  • August 2, 2025: Rules apply for general purpose AI models, governance, confidentiality, and fines.
  • August 2, 2026: The remainder of the Act applies, with the exception of certain high-risk AI requirements.
  • August 2, 2027: Full compliance for high-risk AI systems is expected.

How should organisations respond to the EU AI Act?

Organisations will need to assess the impact of the AI Act on their business and business model, identifying where changes are needed and where further attention needs to be paid to current processes.

Trusted advisers can help organisations understand how this act can be applied within their existing framework by:

  1. Helping organisations identify and assess potential risks associated with their AI applications, differentiating between unacceptable, high, and low or minimal risks. Strategies can then be implemented to mitigate those risks accordingly.
  2. Defining how to comply with the specific rules and requirements for AI systems such as transparency obligations for systems that (i) interact with humans, (ii) are used to detect emotions or determine association with social categories based on biometric data, or (iii) generate or manipulate content such as “deep fakes”.
  3. Mapping their position in the AI value chain (including importers, distributors, and authorised representatives) and the obligations that attach to each role.
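The first step above — triaging an AI inventory into the Act's risk tiers — can be sketched in code. The sketch below is purely illustrative: the tier names mirror the Act's categories, but the trigger lists are simplified hypothetical examples, not the legal definitions, and any real classification requires legal analysis of the Act's annexes.

```python
# Illustrative sketch of a first-pass AI inventory triage.
# The use-case lists are simplified, hypothetical examples --
# they are NOT the Act's legal definitions.

PROHIBITED_USES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USES = {"biometric_identification", "voter_influence",
                  "social_media_recommendation", "critical_infrastructure"}
TRANSPARENCY_USES = {"chatbot", "emotion_detection", "deepfake_generation"}


def classify_risk(use_case: str) -> str:
    """Return an indicative risk tier for a declared AI use case."""
    if use_case in PROHIBITED_USES:
        return "unacceptable"
    if use_case in HIGH_RISK_USES:
        return "high"
    if use_case in TRANSPARENCY_USES:
        return "limited"
    return "minimal"


# Example: triage a small inventory of AI systems by tier.
inventory = ["chatbot", "voter_influence", "spam_filter"]
tiers = {system: classify_risk(system) for system in inventory}
# tiers == {"chatbot": "limited", "voter_influence": "high",
#           "spam_filter": "minimal"}
```

A real triage would record the evidence behind each classification, since the tier drives which obligations (conformity assessment, transparency disclosures, and so on) apply to the system.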

A comprehensive plan that covers all the new and upcoming legislation covered by the EU’s digital strategy can create opportunities as well as mitigate risk.

To learn more, contact a Marsh representative.

The article is for information purposes only. Marsh makes no representation or warranty as to its accuracy. Marsh shall have no obligation to update the article and shall have no liability to any party arising out of this document or any matter contained herein. Any statements concerning actuarial, tax, accounting, labour, or legal matters are based solely on our experience as insurance brokers and risk consultants and are not to be relied upon as actuarial, tax, accounting, labour, or legal advice, for which clients should consult their own professional advisers. Any analysis and information are subject to inherent uncertainty, and the article could be materially affected if any underlying assumptions, conditions, information, or factors are inaccurate or incomplete or should change. Although Marsh may provide advice and recommendations, all decisions regarding which measures should be adopted are the ultimate responsibility of the client.