
AI, ChatGPT, and the professional services firm – an enterprise risk?

This article considers the risk from these new technologies from various perspectives, and offers some thinking about how to adapt.

Marsh supports a significant number of professional services firms. Recently, these clients have started raising enquiries with us about ChatGPT, an artificial intelligence (AI) powered language model, as well as AI tools in general. This article considers the risk from this new technology from various perspectives, and offers some thinking about how to adapt.

Clients of professional services firms as users of free AI

Free public use of ChatGPT-type technology potentially challenges the traditional knowledge and experience advantage professionals rely upon to generate profit. The possibility of a lay person, with free AI support, undertaking certain tasks without professional help may further threaten business models. This is a strategic risk, and it will develop differently by firm size and service line. The majority of material shared online is probably 'know what' - factual information and opinion on what the processes and rules are - rather than 'know how' - how to actually execute specific tasks. Despite media reports of the latest generation of AI passing professional exams, there is also evidence that it often produces wrong answers. Given the speed of progress, however, reliable AI solutions are probably not far off. Firms should therefore assess the impact on the services they offer, and the likely increase in limited retainers, which may in turn require additional controls to manage risk.

Professional services firms as users of free AI

It may prove difficult for professional services firms to control colleagues' use of external AI services. Users need to be mindful that questions asked of AI systems could reveal privileged or confidential information, and that using these services for work-related tasks may breach intellectual property rights. Even using such services to check whether others have made similar enquiries might reveal an interest. The timing of enquiries might reveal strategies, plans, and concerns; those holding the resulting data may use it for their own purposes; and it may be obtained by bad actors, who could seek to exploit it or poison the data pool to manipulate results. If a firm decides that such usage is not permitted, it may need to modify existing employment procedures to say so explicitly.

Changing service delivery by professional services firms

ChatGPT may give a lay user the confidence to take steps that a professional might otherwise undertake, although the current reliability of results is questionable. Many professional services firms already deploy AI to support clients with more interactive FAQs and some basic services. For larger firms, pressure from clients for efficiency and practitioners' own use of AI (as part of internal support) have existed for some time. For example, accounting firms use AI to review documents such as board minutes and leases.

The professional service provider still has significant value to offer, despite the free information and apparent expertise AI provides. Firms can offer reliable operating processes, deal with more bespoke situations, and potentially use AI products for more standard ones. Indeed, many firms are investing heavily in database access that enables clients to self-serve with advice. As an example (although not necessarily using AI), one law firm provides access to data on average settlement sizes for different types of employment allegations: rather than seeking individual advice, the user enters the location and allegations to receive an idea of average reported settlements.

Are risks of providing these services covered by professional indemnity insurance?

For law firms regulated by the Solicitors Regulation Authority (SRA), we would expect professional indemnity claims arising out of reliance on such advice to fall within the wide terms of cover provided by the SRA Minimum Terms and Conditions. The terms of this compulsory minimum cover are considered the broadest of any professional indemnity insurance (although even wider cover can be negotiated). More generally, it would be prudent for professional services firms to consult their broker and regulator; this can prevent surprises about whether claims are covered and whether the service was compliant. The requirements of professional regulators and the Information Commissioner's Office (ICO) will also need to be heeded. Accounting and surveying bodies are at various stages of considering this, but in the meantime the ICO has issued specific guidance.

Thinking about both sides of the issue – users and providers

Overall, we consider that there are significant risks both as a provider and user of AI services, which ought to be monitored and managed appropriately.

Professional services firms as providers of internal AI services

Firms are developing support systems for colleagues internally. More logical and higher quality search access and solutions to policies, procedures, and 'know-how' may be extremely useful. However, a key issue is the ongoing effort required to maintain these systems with up to date information. Additionally, it would be unsurprising if some clients seek to limit use of highly sensitive information to a particular work group, albeit this is not a new issue. If security is breached and internal AI is compromised, exposure of usage data, or corruption of the data and AI-based results, may also create reputational risks for the firm and its clients.

Professional services firms as providers of external AI services

As this is a significant area of risk, it may be useful to treat what is being provided as a product. As a reputational risk - and a novel area - we believe it is worth considering whether use of these tools and creation of products creates fundamentally new hazards.

We have opted to use the bow tie risk tool as a lens to consider prevention and mitigation. 

One example is the bow tie we developed for law firm cyber risk.

Most users find it helpful to both define and order the key areas of the diagram: 

  1. Identify the ‘hazard’.
  2. Define the ‘top event’ – how the ‘hazard’ becomes problematic or uncontrollable.
  3. Identify the key ‘threats’ enabling the ‘top event’, and ‘barriers’ that address those ‘threats’.
  4. Consider the ‘consequences’, and position appropriate ‘mitigants’ where possible.

In this case:

  1. Hazard/top event - does use of AI, or creation of AI products, create a fundamental change to the source of risk (the 'hazard')?
  • Use of AI: In our opinion, AI use does not create a new 'hazard' for firms - the hazard remains the holding of sensitive data. The 'top event' is that control/security of the data asset fails, and the data loses value or its publication damages the client's business.
  • Creating and selling AI products: However, creation and distribution of AI products for clients does create a new 'hazard', which in our view requires greater thought.
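The four steps of the bow tie method can be sketched as a simple data model. This is a minimal illustrative sketch in Python, not a Marsh tool: all class names, field names, and the example values are our own, drawn loosely from the 'use of AI' case above.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    name: str
    barriers: list[str]  # controls intended to stop this threat causing the top event

@dataclass
class Consequence:
    name: str
    mitigants: list[str]  # measures that reduce impact once the top event occurs

@dataclass
class BowTie:
    hazard: str                      # step 1: the underlying source of risk
    top_event: str                   # step 2: where the hazard becomes uncontrollable
    threats: list[Threat]            # step 3: threats and their barriers
    consequences: list[Consequence]  # step 4: consequences and their mitigants

# Hypothetical example based on the 'use of AI' case discussed in the text
ai_use = BowTie(
    hazard="Holding of sensitive client data",
    top_event="Control/security of the data asset fails",
    threats=[Threat("Staff querying external AI with client data",
                    barriers=["Usage policy", "Access controls"])],
    consequences=[Consequence("Publication damages the client's business",
                              mitigants=["Incident response plan", "Client notification"])],
)
```

Laying the diagram out as data like this makes the later questions - which barriers exist, and whether a breach of one would be detected - straightforward to audit.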

Product risk is familiar to some professional services firms, particularly IT developers and brokers. The Financial Conduct Authority (FCA) publishes guidance on this regularly. Although much of its focus is consumer exposure, the governance methodology is worth considering. Specifically, brokers selling products are expected to install governance and oversight processes to 'design, approve, market, and manage products throughout the products' lifecycle to ensure they meet legal and regulatory requirements.' Historically, insurance policies for software developers and electrical engineering have often limited or completely excluded claims related to product performance failure, and losses arising from negligent claims about product capabilities made during sales processes were often not covered. It should also be noted that, from 31 July 2023, much of the latest FCA Consumer Duty rules will apply to retail customers and SME businesses, not just small individual consumers. Although other regulators' expectations about duties to consumers might differ in relation to AI products, we would be surprised if they diverge much from this approach.

As more purely professional services firms may now be offering what are considered products, we suggest implementing roles, processes, and procedures for these products' design, approval, marketing, and maintenance. This enables firms to control the risk, test products, and check feedback. The FCA model's product lifecycle envisages the need for product withdrawal or enhancement to ensure current needs are met - an important issue. In a mature risk environment, such products are internally risk managed and licensed for a set period, after which they must re-qualify through the compliance process. Put simply, without ongoing review and updating, the product will fail.

In financial services, governance, assessment, and product refreshes were developed in response to market failures - particularly of financial products such as endowments and finance insurance. Professional indemnity insurers of professional services firms have experience of suitability issues arising when homogenised advice is given to large numbers of clients and the product does not perform as expected. In the consumer space, these have created a significant number of claims, usually under £100,000 in value.

Firms designing and delivering AI products to clients should be cognisant of this risk, and should ensure robust product design and management, with ongoing testing at least annually - possibly more often, depending on feedback and changes. Providing a governance structure for a product, or putting the product through such a process, is sometimes recognised late in the development cycle and becomes an unwelcome drag on launch. However, experience shows it is a necessary step.

For large professional services firms, an AI product is unlikely to generate wrong results for tens of thousands of users, making relatively modest claims - as happened in financial services. However, if outputs are wrong, the product could result in identical deficient advice being provided to multiple clients in a short period of time, without much chance of detection. 

Reviewing the bow tie model, we can consider what barriers are in place to prevent product failure and how they relate to threats. A common problem occurs when the threats alter and the system does not detect that a barrier has been breached. If the overall system can detect the breach then, in the model's terms, this is an 'escalation factor': an oversight review of the model should be triggered, along with a potential re-design of the 'barriers'. Without this review, the likelihood of widespread product failure is significantly higher.
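The review trigger described above can be expressed as a simple check. This is a hedged sketch only - the function name, the breach-flag representation, and the example barrier names are our own illustrative assumptions, not part of any standard bow tie tooling.

```python
def needs_oversight_review(barriers: dict[str, bool]) -> bool:
    """Return True if any monitored barrier has been breached, signalling
    that an oversight review (and possible barrier re-design) is due.

    `barriers` maps a barrier name to a breached flag, as reported by
    whatever monitoring the firm has in place. The dangerous case the
    text describes - an undetected breach - is, by definition, a barrier
    that never appears as breached in this mapping at all.
    """
    return any(breached for breached in barriers.values())

# Hypothetical monitored barriers for an AI product
status = {"input validation": False, "output sampling": True}
```

Here `needs_oversight_review(status)` returns True because the 'output sampling' barrier is flagged as breached, so a review of the model and its barriers should be triggered.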

Drawing on previous involvement in the development of online service models, we note that the cost of maintaining and testing product suitability is often significant, potentially eroding the apparent profitability of such approaches. There is also a governance question: who should be responsible for ongoing maintenance and testing, and will they be independent and motivated enough to undertake the role? These issues are often unpopular with innovative thinkers, who are attracted to creating novelty by leveraging know-how.

It may appear attractive and innovative to create opportunity from transforming professional services and offering what have traditionally been bespoke services as a product. However, professional services firms must develop more back office assurance and infrastructure to support delivery of high quality service through such products. Maintenance, design refresh, and testing must be factored into the cost, in order to manage product risk.

Meet the authors

John Kunzler

Managing Director

Victoria Prescott

Senior Vice President