
Mercer Marsh Benefits

AI: Your employee health and benefits copilot

Employee health and benefits programs are a top people risk globally. AI can reduce burnout and costs, but challenges include data quality, governance, innovation, and access to care. Here’s how AI can be your employee health and benefits copilot.

Employee health and benefits programs need an overhaul. HR and risk managers view the rising cost of health and benefits as the number one people risk globally, according to new People Risk 2024 research, and most employees (82%) feel at risk of burnout, as found in Health on Demand, a recent survey of 17,000+ employees. However, leading organizations can use artificial intelligence (AI) to bring down the risk of burnout and control costs.

AI systems and capabilities might drive better outcomes more broadly, but several forces may be holding them back. Realizing the full potential of AI requires high-quality data, comprehensive governance plans, technological innovation, and improved access to care.

How did health and well-being become unsustainable?

The rising cost of care strains healthcare delivery systems and everyone who depends on them. One measure of these expenses is the medical trend rate: the year-over-year change in claims cost per person. Mercer Marsh Benefits' (MMB) Health Trends 2024 report found that insurers predict an 11.7% increase in the medical trend rate outside the US for 2024 — potentially the fourth double-digit increase in as many years. Employers are aware of this problem; nearly 4 in 10 (37%) are concerned about medical costs increasing beyond general inflation.
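As a rough illustration of the arithmetic behind the medical trend rate (the dollar figures below are hypothetical, not from the report):

```python
# Medical trend rate: year-over-year percentage change in claims cost per person.
# The example figures are hypothetical, for illustration only.
def trend_rate(prev_cost: float, curr_cost: float) -> float:
    """Return the year-over-year percentage change in per-person claims cost."""
    return (curr_cost - prev_cost) / prev_cost * 100

# A plan whose average claims cost per person rises from $1,000 to $1,117
# shows an 11.7% medical trend rate.
print(round(trend_rate(1000.0, 1117.0), 1))  # 11.7
```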

Why does the medical trend keep climbing? According to MMB’s Health Trends 2024 report, 86% of insurers believe much of the 2023 increase was due to medical inflation: the rising cost of health equipment and services. Providers would echo that sentiment, with the American Hospital Association noting that supply chain pressures drove up the cost of medical supplies by 18.5% from 2019 to 2022.

On the services side, the healthcare talent pipeline is getting weaker and more expensive, according to People Risk 2024 research. The sector's HR leaders report that skills shortages and rising labor costs are their top two workforce challenges this year, and nearly half of executives (46%) believe their current talent models will struggle to meet demand. The talent agrees: in the survey that underpins Mercer's Global Talent Trends 2024, four in five healthcare employees feel close to burnout and 29% are planning to quit.

Benefits utilization has been another key cost inflator, according to three out of four insurers in 2023. Higher utilization makes it more costly for health plan administrators to process and pay claims, which in turn leads to higher premiums. Employers can respond by optimizing their health plan design for high-quality care and smart cost-sharing. What's more, leaders can effectively guide employees toward desired behaviors; encouraging outpatient treatment within provider networks could be an ideal way to start.

Despite rising costs, health risks and operational challenges, insurers believe that employers in 2024 will prioritize making plan improvements to attract and engage talent (57%) over reducing plan coverage to manage cost (43%). It’s a noble ambition, but for such an investment to yield returns — especially in today’s climate — the whole system needs a reset.

Realizing the promise of AI in health and benefits

Armed with the power to learn, analyze, predict and create, AI can help solve some of the biggest problems in health and wellness. Given the rise of large language models (LLMs), AI has the potential to supercharge employee health and benefits through increased efficiency and so much more. Productivity gains may be stealing the headlines today, but it’s in AI’s ability to predict and personalize that we see the promise of a better tomorrow.

AI’s potential impact on employee health:

For employers and their people, AI has the potential to drive savings and better patient outcomes. Potential applications include:

Some generative AI tools can create "digital twins," virtual models of healthcare facilities including images and floor plans, to help optimize workflows and operations. Other systems can schedule appointments, predict wait times, and field common patient inquiries to maximize efficiency and reduce the burden on workers.

While medical LLMs such as Google’s Med-PaLM are still in their infancy, they could one day support clinicians with triage and other tasks. Small language models can also fuel portable aids with less computing power, potentially putting more solutions in the hands of less-experienced workers.

Image-based generative AI tools can enhance and restore medical imaging (e.g., X-rays, MRIs and CT scans) for better visibility, improved diagnoses, and informed decision-making. Yet outsourcing this important work to machines could introduce major risks (more on that below); thus, AI-powered tools are far more likely to complement healthcare professionals and not replace them completely.

Healthcare decisions depend on many factors that vary from person to person. AI allows for advanced risk stratification by analyzing and monitoring patient data, predicting who needs what care, and helping clinicians target resources for better return on investment for improved patient outcomes.
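A minimal sketch of the risk-stratification idea, using a simple rule-based score. The fields and thresholds are hypothetical; real systems use models trained on clinical and claims data.

```python
# Toy risk stratification: bucket members by a simple score so care teams
# can target resources. Fields and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Member:
    age: int
    chronic_conditions: int
    er_visits_last_year: int

def risk_score(m: Member) -> int:
    """Compute an illustrative risk score from a few member attributes."""
    score = 2 if m.age >= 65 else 0
    score += 2 * m.chronic_conditions
    score += 3 * m.er_visits_last_year
    return score

def stratify(m: Member) -> str:
    """Map a risk score onto a care-management tier."""
    s = risk_score(m)
    if s >= 8:
        return "high"    # proactive case management
    if s >= 4:
        return "medium"  # targeted outreach
    return "low"         # routine preventive care

print(stratify(Member(age=70, chronic_conditions=2, er_visits_last_year=1)))  # high
```

The design point is the tiering itself: once members are bucketed, clinicians can direct the most intensive (and expensive) interventions at the "high" tier rather than spreading resources evenly.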

Amid the shortage of mental health therapists today, chatbots and other digital platforms can provide encouragement, guided meditations, and even virtual reality for certain types of therapy. It’s unclear whether AI will ever be approved to become an on-demand counselor, but it could inform a human practitioner’s judgment.

AI-powered apps and wearables can track biometric data and other inputs for signs of unhealthy lifestyles, then provide "nudges" such as mobile alerts to encourage behavioral change. Forty-five percent of employees find these digital tools helpful, and the proactive approach helps prevent downstream costs.

AI’s potential impact on employee benefits:

For benefits professionals and the employees they support, AI offers a range of ways to provide relevant health benefits, elevate the employee experience and improve access to care. Some possible use cases include:

When asked what would most improve their compensation, 45% of workers chose more types of rewards and personalization. Benefits experts comb through stacks of data to determine what’s available in the market, which options their firms can provide, and whether employees want or need certain programs. AI can analyze it all in a fraction of the time, and offer suggestions to support more informed decision-making.

Many employees don’t fully utilize their health benefits, and it isn’t for lack of interest. Just three in five employees report having a seamless, one-stop experience with benefits technology. An AI-powered front end can help guide workers through the benefits ecosystem more effectively and nudge them toward decisions and resources.

Creating on-brand and engaging communications takes time — a big ask for busy benefits professionals. Generative AI tools can help craft mailers, web copy and emails, for a consistent benefits experience that informs, inspires, and resonates with different persona groups for maximum impact.

Speaking of time-strapped benefits leaders, imagine how much time could be saved by automating responses to routine benefits inquiries. Chatbots are already augmenting support models to field these common asks, and even escalate more sensitive issues to human experts in real time. As we continue to chat with the bots, AI will help aggregate our responses and interests for insights that drive action.
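The routing logic described above, answer routine asks and escalate sensitive ones, can be sketched with simple keyword matching. The FAQ entries and keyword lists are illustrative; production chatbots typically use intent classifiers rather than substring checks.

```python
# Toy benefits-inquiry router: answer routine questions from a small FAQ,
# escalate sensitive topics to a human expert. All keywords are illustrative.
FAQ = {
    "deductible": "Your annual deductible is listed in your plan summary.",
    "enrollment": "Open enrollment runs each autumn; see the benefits portal.",
}
SENSITIVE = {"claim denied", "mental health", "complaint"}

def route(message: str) -> str:
    """Return an FAQ answer when one matches; otherwise escalate to a human."""
    text = message.lower()
    # Sensitive topics skip the FAQ entirely and go straight to a person.
    if any(term in text for term in SENSITIVE):
        return "ESCALATE: routing to a human benefits expert."
    for keyword, answer in FAQ.items():
        if keyword in text:
            return answer
    return "ESCALATE: no FAQ match; routing to a human benefits expert."

print(route("What is my deductible?"))
print(route("My claim denied and I want to appeal"))
```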

AI’s potential impact on providers and insurers:

AI adoption among health and insurance firms can stimulate cost savings and better outcomes for employers and people. How might AI impact the ROI on benefits spend? When evaluating plans and provider networks, consider these use cases:

Clinicians often cite electronic health records (EHRs) as a cause of job dissatisfaction and burnout. AI tools such as DAX Copilot can help manage EHRs to reduce the burden on provider operations.

Given the pace of change in their field, healthcare professionals are always learning. Generative AI can produce virtual trainings for patient interactions, hands-on surgical procedures, and recertification exams — ultimately fueling better outcomes.

LLMs can handle research and development tasks at greater volume and speed than humans alone. They can even help design new drugs, accelerate clinical trials, and generate synthetic patient data for low-risk testing. These efficiencies could yield big savings for customers.

In claims administration, AI might facilitate prior authorizations, confirm basic policy information, and approve or deny treatment based on coverage details. AI can also supercharge claims audits and analytics to help identify fraud, waste and abuse. Some of the savings from these controls can be passed on to employers and claimants.

Busy providers have limited time to interact with patients, analyze symptoms and provide diagnoses. This time crunch could potentially lead to harmful mistakes and delays. Over time, AI has the potential to support physicians in making more accurate diagnoses, which would benefit everyone.

Patients’ communication preferences often vary by age, culture, socioeconomic status and other factors. Generative AI can translate health communications into different formats, channels, styles and languages to maximize patient engagement, accessibility and health efficacy.

Some of these applications are too new and untested to implement safely today, but they are already changing how we think about the current tech landscape and the benefits ecosystem. To wield AI effectively, leading institutions are not just embracing the art of the possible; they are taking steps to mitigate known and emerging risks.

Top-of-mind concerns about AI adoption in employee health and benefits:

Some concerns about AI are already in focus. We know LLMs can hallucinate, producing flawed outputs without warning and making suggestions that, without validation, can have dire consequences. We’ve seen early versions of certain tools echo the biases that lurk in their training data. But as we connect these systems to patient records and life-or-death decisions, the stakes couldn’t be higher. Here are the key risks and roadblocks to watch for.

Data is the backbone of AI — LLMs need droves of it for peak performance. Yet healthcare and benefits data is highly regulated, and different countries have different rules for using and sharing it. Without a universal standard, these regulations could make it immensely difficult to build and use AI tools for healthcare needs.

Data quality is another big issue. Healthcare data comes in multiple formats, and merging or converting between them could increase the risk of errors. Biased AI training data may not reflect certain patient populations or the latest standards and findings — potentially driving flawed and high-risk decisions. What’s more, a lack of transparency in the sources of training data can make it difficult for humans to fact-check an AI model’s outputs.

One solution to the data dilemma is synthetic data, artificial information that’s created electronically to support predictive analytics, software development and machine learning. Synthetic data is faster and cheaper to acquire than real-world data; it helps close the gaps in incomplete datasets, and even mimics patient data without the personal identifiers that fuel privacy concerns.

Consider the issue of medical indemnity liability: if health and benefits workers use AI to help them counsel, diagnose and decide, who bears responsibility when something goes wrong?

Given its potential impact on public health and human rights, AI for health and well-being is subject to tight regulations and scrutiny from officials. A number of governments are rolling out their own laws, though the European Union’s (EU's) AI Act is perhaps the most comprehensive to date.

The Act lays out several functions and features of AI systems that would qualify them as “high-risk” and thus subject them to additional standards. AI tools that control access to health benefits, support critical treatment decisions, or use biometrics for certain tasks are all deemed “high-risk.”

To comply with regulations like the EU’s, companies can strengthen controls and decision-making to close emerging governance gaps. Since the Act refers to “use cases” in how LLMs and domain-specific tools are applied, greater attention to these areas will be required. Organizations may also need to strengthen their ethical AI policies to ensure a human-centric approach to AI adoption, with diverse input and potential risks embedded into early thinking.

One popular critique of AI regulation is that it curbs investments and risk-taking, which drive innovation. Yet given the outsized role of healthcare and benefits in people’s well-being, the “safe to fail” approach that works for some industries could have dire consequences for public health and patient confidentiality. For healthcare applications, it would be better for AI developers to follow the medical mantra: “first, do no harm.”

From executives to investors and politicians, the number of different players in health and well-being also makes it difficult to spark change. Cutting-edge AI startups, for instance, face stiff competition from legacy software vendors controlling the market.

Socioeconomic differences can drive massive health disparities. People from rural, low-income and/or underrepresented communities tend to face more barriers — financial, linguistic, technological, educational and logistical — to effectively accessing healthcare resources, including benefits. The advent of AI models that support more languages than ChatGPT may close knowledge gaps, but not developmental ones.

In theory, healthcare delivery systems could use AI to bridge these gaps. AI can support cost savings, personalization, translations, and efficiencies so more patients can benefit.

However, AI's potential to fuel bias and discrimination is especially problematic in health and benefits. It could be used to identify high-risk populations and individuals and refuse them coverage, even though they need care the most. Further, low-quality training data that doesn't include certain groups' medical history could inadvertently drive decisions that aren't in their best interest.

Optimizing healthcare and employee well-being

Busy benefits professionals — and the vendors they choose — need the kind of human-centric productivity that only work design, assessments and upskilling can provide. AI can help solve the puzzle, but only digital-first cultures will unlock its full potential. For insights and support in building these cultures, contact a consultant.

Since being digital is a firm-wide endeavor, one potential challenge is resistance to change. Employees might avoid using AI-powered health resources. Older workers, who tend to file more claims, might be especially AI-averse. By using generative AI to personalize benefits packages and related communications, leaders can more effectively build buy-in and take-up with different persona groups.

For employers more broadly, comprehensive, affordable healthcare, including mental health, and a robust benefits strategy, including active cost management, can boost the corporate immune system by helping identify, predict, and mitigate risks to both the enterprise and its people. Organizations can use health benefits and other rewards to address these risks, do the right thing for employees and society, and even build trust and equity in the process. Discover how solutions from MMB can lift employee well-being programs with insights and analytics — or, get in touch with an experienced consultant to learn more.

The contents of this article are intended to convey general information only and not to provide legal advice or opinions.

Contributors:

Dr Luke James


Workforce Health Leader, Mercer Marsh Benefits Europe

  • United Kingdom

Dr Katie Brown


Senior Health and Benefits Consultant, US Mercer

  • United States


Kate Bravery

Advisory Solutions and Insight Leader, Mercer

  • United Kingdom
