The benefits of adopting AI in customer service are widely publicised. With the promise of efficiency, speed and lower operational costs, it’s little wonder that companies are signing up. However, removing human interaction risks eroding trust and causing harm. So, we’re asking: will AI replace customer service agents?

AI is being promoted as an efficient and cost-effective method of streamlining customer service. This technology operates 24/7 to enable customers to access support at times that are convenient to them. It can help call centres handle high demand and provide a fast resolution to common issues.

Many organisations have already embedded AI into their customer service operations. Examples include:

  • Chatbots on websites to answer common questions and reduce call volumes.
  • Pattern analysis and real-time monitoring to detect unusual activity and identify potential issues such as a stolen bank card.
  • Automated processes that speed up customer account logins, ID verification and payment processes.

According to the Zendesk Customer Experience Trends Report 2024, 70% of customer experience leaders believe ‘generative AI in customer service is making every interaction more efficient’. The report also states that generative AI will ‘accelerate the delivery of more humanised journeys, ones that leverage data to feel personable and interactive’. Interestingly, this suggests that technology can provide more personal communication than a person.

The promotion of AI suggests that technology will be able to handle routine enquiries and processes, freeing up call agents to deal with more complex issues or specific customer needs. It evokes an image of humans and technology working in harmony to secure the best outcomes for customers.

However, does Metrigy’s AI for Business Success 24-25 offer a truer picture of events? In a survey of 697 companies, 36.8% stated that adopting AI had reduced employee headcount. In addition, 55.7% stated that it would reduce new hires. This suggests that many companies are seeing AI as a replacement for call handlers and a means to reduce staff costs.

This can be beneficial to customer service in areas where there is a shortage of agents and it is difficult to recruit and retain enough employees to meet demand. However, a reduction in staff also raises the question of who is on hand to handle the complex issues or specific customer needs.

In a bid to drive efficiency, have the opinions and preferences of customers, particularly vulnerable customers, been considered? Could a customer who struggles to understand or use the technology be at risk of harm?

Contrasting with the business perspective, a recent Gartner study revealed that the majority of customers do not want companies to use AI in customer service. Of the 6,000 customers surveyed, 88% have major concerns about AI. What’s more, half stated they would consider switching to a competitor if they discovered a company was using AI for customer service.

When using AI, customers’ experiences aren’t always streamlined.

We’ve heard frustrations around:

  • Typing lengthy requests into a chatbot, only to be told that it doesn’t have the answer and that a phone call is needed.
  • Self-service systems taking longer to manage a process or resolve an issue than a human interaction.
  • Impersonal services that make it harder to reach a real person.

The Gartner report findings include customers’ concerns about job losses, getting the wrong answers, data security and unfair treatment. In addition, many customers simply enjoy social interaction with another person. They don’t mind if a transaction takes longer because it’s a chance to have a chat or get a bit of help to sort something out.

Supporting the Gartner report is Cavell’s 2024 Voice of the Consumer Report, which revealed that 44% of consumers think the quality of customer service is worse now than three years ago. Additionally, over a third (35%) of adults in the UK feel that chatbots and automated systems are bad at customer service, and that speaking to a human remains the fastest and best way to resolve an issue and receive a positive outcome.

For firms regulated by the Financial Conduct Authority, there is a requirement for customer preferences to drive decision-making. Therefore, work needs to be done to understand the customer’s perspective. On this point, we’d like to highlight the need to speak to customers in vulnerable circumstances who may be at risk of harm.

This isn’t only a regulatory matter, though. If customers are willing to take their business elsewhere, ignoring their preferences could hit the bottom line.

Clear and open communication can help to alleviate some of the concerns. Therefore, where AI is in use or is being adopted, let customers know. Explain the benefits to them, reassure them with information on data protection and let them know about alternative options.

The benefits of AI in customer service present a compelling case to senior managers and stakeholders. However, are boardroom decisions being made without consideration for, or the involvement of, customers? We can’t ignore the potential impact on customers with additional needs or complex enquiries. So, what do your customers want?

About the author.

Helen Pettifer FRSA.

Helen Pettifer is Director of Helen Pettifer Training Ltd and a specialist in the fair treatment of vulnerable customers.

She has a background in call centre management and is committed to customer service excellence. Her training ensures front-line staff gain the awareness and resources to confidently identify and respond to signs of vulnerability.

Helen Pettifer is a British Standards Institution (BSI) associate consultant for BS 22458: 2022 Consumer Vulnerability, a Mental Health First Aider, a Suicide First Aider, a Dementia Friend, and a Friends Against Scams Champion. Recognised as a changemaker, she was invited to become a Fellow of the Royal Society of Arts in 2022.
