Will AI replace contact centers? It’s a firm No – and here’s why


10 Jun 2024

Ashley Burton, Head of Product at Eckoh, talks about why AI won't be replacing contact centers. Instead, he hopes to see AI aid with routine tasks and help improve overall efficiency.

Since the launch of ChatGPT a couple of years ago, the tech world has been abuzz with talk of AI taking over the world and eliminating people’s jobs. And I can see why. The tech media tends to focus on the amazing promise of the technology, while everyday people rightly worry about their own circumstances and whether their livelihoods will be at risk.

There are currently around 3 million people employed in contact centers in the US. And one of the big ideas around AI is the notion of AI ‘agents’ that can perform actions at the command of customers. This sounds potentially scary, but the technology has its limitations.

Don’t get me wrong, some of the things you can use AI for are incredible and I’ve already baked AI into my day-to-day workflow, personally and professionally. And yet, while AI makes me more productive, I’m also intimately exposed to its limitations.

I’ve been working in the contact center space since my first job out of university as a contact center agent back in the 1990s. I’ve spent most of my career in or around customer engagement, big data, automation, natural language understanding and most recently Generative AI. I’d like to use my background to talk you through what the tech is good at, where it falls down, and what this means for contact centers.

Let’s start from the beginning.

What actually is Artificial Intelligence?

It’s worth asking this question because far too much coverage in the tech media – and within marketing from technology firms – treats AI like mystical ‘woo’ with pictures of futuristic robots, when the reality is that it’s all just software doing fancy math. You can achieve amazing things with it, but it’s still just software and comes with all of the common pitfalls.

The term Artificial Intelligence tends to cover a number of key branches of technology:

  • Machine Learning is essentially complex statistics combined with computing power, and is typically used for prediction and forecasting use cases.
  • Machine Learning in turn led to a subset called Deep Learning. This is a much more complex approach, with ‘neural networks’ designed to mimic the way our brains learn, trained on sometimes large and sometimes small amounts of data.
  • Deep Learning gave rise to the more recent development of Large Language Models (LLMs) such as ChatGPT, and image generation tools such as Midjourney. Much of this took a huge boost from a 2017 paper by Google researchers defining a new ‘transformer’ architecture (Attention Is All You Need).
  • Collectively, the field of Artificial Intelligence that is used to create new content is called Generative AI.

Is AI actually intelligent?

The simple answer is No. And no credible vendor in the space would suggest that their model is genuinely intelligent in the same way humans are. There is an entirely separate field of research working on Artificial General Intelligence, but that’s still a long way off. The current approach of Generative AI is really just mimicking the way humans write and speak. And where you see signs of 'understanding', the model is simply making predictions based on its training data.

Where Large Language Models excel is in performing tasks directly related to language. Essentially, they are huge statistical models of words built from all of the content scraped from books, newspapers and the Internet. The latest models have been trained on huge volumes of data, as much as 13 trillion ‘tokens’ (a token is a word or part of a word), representing a massive proportion of the publicly-available content ever written.
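To make the idea of tokens concrete, here is a deliberately naive sketch in Python. Real models use subword tokenizers (such as byte-pair encoding) that often split words into pieces, so actual token counts will differ; this only illustrates that text becomes a sequence of small units before the model sees it.

```python
import re

def rough_tokens(text):
    """Naive word/punctuation split -- a simplified stand-in for the
    subword tokenizers (e.g. BPE) that real LLMs actually use."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = rough_tokens("Contact centers aren't going anywhere.")
# Note how even this crude split breaks "aren't" into pieces,
# much as real tokenizers split words into subword units.
```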

Peeking behind the curtain

Why does Generative AI work so well? Largely because what we ask these models to do isn’t truly unique – and buried within the model is a pattern it can use to generate an answer. Because of this, it feels like the AI can talk to you like a human … it feels as if it actually knows the answers to your questions. But in reality, the model is simply predicting what the next most likely words would be in a conversation like this. It’s drawing on its huge bank of words written or spoken by people – so inevitably it’s pretty good at sounding human.
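To show what “predicting the next most likely word” means in miniature, here is a toy bigram model in Python. Real LLMs use billions of parameters and attention over long contexts rather than simple word-pair counts, but the statistical principle – pick the most likely continuation seen in the training data – is the same.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for trillions of tokens
corpus = (
    "the agent resolved the issue and the customer thanked the agent"
).split()

# Count which word follows which
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if
    the word never appeared mid-sentence in the training data."""
    counts = bigrams.get(word)
    return counts.most_common(1)[0][0] if counts else None
```

Ask it what follows “the” and it answers “agent”, simply because that pairing occurred most often – no understanding involved, just counting.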

Because these are language models, they’re really good at text processing tasks such as summarizing documents, translating text between languages or mimicking writing styles, and answering questions based on knowledge base content.

Doesn’t this sound a bit like chatbots?

Those of you who have been in the customer experience industry for some time will recall the start of the chatbot era. I was acquainted with IBM Watson reasonably early, spending time at IBM’s HQ and research center, getting to know Watson. I believe IBM had the best product in that era and my company genuinely helped some organizations to automate and triage contacts. But there were some big gaps.

Earlier chatbots were really good at handling well-defined, structured tasks but didn’t do well with ‘long tail’ queries. They also took quite a bit of work to put together, and fundamentally consumers were unconvinced, feeling they were largely being given the runaround.

That was then, what about now?

The modern era of these tools may use Generative AI, but they’re still fundamentally chatbots. Using Large Language Models does give the production of these bots a huge boost in a few key areas.

Generative AI can make them handle conversational queries better through a process called Retrieval Augmented Generation, which is basically searching a knowledge base and getting an LLM to reword the answer. We also now have considerably better text-to-speech, which could make voice bots more compelling, while Large Language Models can make the whole experience more conversational.
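Here is a minimal sketch of the retrieval half of that process, using naive word overlap as the “search”. Production systems typically use vector embeddings and a proper search index, and the retrieved passage would then be handed to an LLM with a prompt instructing it to answer only from that passage – this just illustrates the shape of the idea.

```python
def retrieve(query, kb):
    """Return the knowledge-base entry sharing the most words with
    the query -- a crude stand-in for embedding-based search."""
    q = set(query.lower().split())
    return max(kb, key=lambda doc: len(q & set(doc.lower().split())))

kb = [
    "Refunds are issued within 14 days of receiving the returned item.",
    "Our support line is open 9am to 5pm, Monday to Friday.",
]

context = retrieve("how long do refunds take", kb)
# In a real RAG pipeline, `context` plus the question would now be
# sent to an LLM with a prompt like:
# "Answer the question using only the passage below: ..."
```

Grounding the LLM in retrieved text like this is what keeps the bot’s answers anchored to your actual policies rather than whatever its training data happens to contain.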

However, the reality is that we’re still talking about chatbots – just better chatbots.

It’s important to note that all technology goes in waves. This is captured in Gartner’s famous Hype Cycle, which showed (as of 2023) that Generative AI was at the “Peak of Inflated Expectations”, where initial successes and proofs-of-concept buoy interest. But this will quickly lead to the “Trough of Disillusionment”, where people realize that the tech doesn’t yet match up to the marketing hype (alongside smart glasses, blockchain and cryptocurrencies).

[Image: Gartner’s 2023 Hype Cycle for Emerging Technologies]

The big problem goes back to the earlier point: the AI neither knows nor understands anything. It’s also worth noting that the technology is young and fast moving. There are significant risks that AI-driven chatbots could provide false or inappropriate answers.

What are the risks of using AI for contact centers?

Generative AI is a young field and a brand new technology for contact centers. One of the greatest risks of using Large Language Models is that of ‘hallucinations’. The LLM’s entire goal is to predict the next word in the answer to a question. But, generally speaking, they don’t know the answer, so they simply invent something to say.

The LLM can’t even tell you that it has made up an answer, because it doesn’t know; what it’s giving you is the statistically most likely result. The next biggest class of risk is toxicity and inappropriate behavior, where the model responds with language or answers that you would not want used with your customers.

Imagine a world where an AI solution talking to your customers invents its own refund policy, sells a product worth $60k for $1, or simply has a bad day and starts swearing at customers. All of these things happened in the past year, and even Google’s recent launch of AI Overviews has advised people to eat rocks and put glue on pizza. If the company that effectively invented the current state-of-the-art tech in many areas can’t produce something bulletproof, then who can?

What about data security for AI?

As a processor of customer data, contact centers also need to strongly consider data privacy and security implications or they risk destroying their brand reputation and incurring significant financial loss through fines and remediation.

If your solution passes sensitive data to a third-party service, you need to take all of the precautions required with any software solution. This can be a challenge: you may wish, for example, to use AI to provide summaries of call transcripts, so you must ensure your provider is handling that data correctly.

All AI providers need one thing above all else – data. If you’re considering using AI-driven services, you should ensure the provider isn’t storing and using your customers’ data to build their own models without your knowledge. Even if you’re happy for a provider to store your data for a period of time as part of building a tailored model for your business, you also need to make sure that the provider is not storing that data longer than the retention period that you’ve agreed and communicated with your customers. Everything must conform with the Right to be Forgotten and other requirements of privacy laws such as CPRA and GDPR.

It’s worth noting that while privacy legislation is fairly new in the US, five US states have active laws in place, three more will be in force by the end of 2024, and a further 10 have been signed and will come into effect over the following years (see the IAPP Privacy Legislation Tracker).

Beyond the risk of incidental Personal Data or PII being stored and used by model providers, you may also need to consider special classes of highly-valuable data. This can include payment card data that falls under the super-stringent PCI-DSS v4.0 regulations or banking data for ACH under the strengthening NACHA regulations.

The AI model providers are not payment providers and they are usually wildly ignorant of the restrictions placed on merchants that process payments. The best and easiest way to avoid the risk of losing sensitive data is not to send it to the AI in the first place. There are plenty of options out there, including solutions from my employer Eckoh, that can protect sensitive data for AI solutions and also prevent it from entering your contact center environments.
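As one simplified illustration of “not sending it in the first place”, here is a sketch that masks card-number-like digit runs (validated with the standard Luhn checksum) before a transcript reaches any AI service. A real PCI-DSS-grade solution is far more involved – handling DTMF, audio, partial numbers and more – so treat this only as a demonstration of the principle.

```python
import re

def luhn_ok(digits):
    """Standard Luhn checksum used to validate card numbers."""
    total = 0
    for i, d in enumerate(reversed(digits)):
        n = int(d)
        if i % 2 == 1:       # double every second digit from the right
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def redact(text):
    """Mask any 13-19 digit run (spaces/dashes allowed) that passes
    the Luhn check, so it never reaches a downstream AI service."""
    def mask(match):
        digits = re.sub(r"\D", "", match.group())
        return "[REDACTED]" if luhn_ok(digits) else match.group()
    return re.sub(r"\b\d(?:[ -]?\d){12,18}\b", mask, text)
```

Running the transcript through a filter like this before it leaves your environment means the AI provider never holds the sensitive data, which is a far stronger position than trusting their retention policy.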

So, how can AI help in a contact center?

If you run a contact center, ask yourself the following questions:

  • Do you have the resources to address all of your customer queries effectively?
  • Are you able to easily recruit and retain high-quality agents for your business?
  • Would you like to give agents more time to focus on upselling products, answering complex queries, or supporting vulnerable customers (rather than answering run-of-the-mill questions)?
  • Could tools that enhance agent productivity be useful?

I expect most of these questions will resonate. And luckily, Generative AI could help with all of these things. The contact center space has always been a story of constantly-squeezed resources with supervisors and managers barely having time or budget to focus on the improvements they’d wish for. The post-pandemic world has left its mark – with recruitment and retention being even harder than before.

It’s best to think of AI as a ‘colleague’ that’s able to help out with certain tasks and take some weight off your shoulders, rather like having your own personal assistant. Many industries will embrace this approach; it’s why Microsoft has called its AI platform “Copilot” because it will work with you, not instead of you. AI can make us all more efficient if it’s harnessed and used safely.

Contact centers aren’t being replaced by AI then?

No, not now, not soon and I honestly don’t think it will ever happen.

The nature of the work that contact centers do will change over time, but it always does. Customers actually want support from people and not to be given the run-around by automated systems. And besides, the automation itself can only work if you’ve got a series of perfectly aligned, integrated, and standardized underlying systems, which no large organization ever has (and I’ve worked with dozens of them over the years).

We are in the middle of a revolution, but it’s spread slowly over time – where technology ultimately brings benefits in steps, rather than one giant terrifying leap.

Decades ago, automated voice services and IVR came along to route calls and automate tasks, but that didn’t kill contact centers. When self-service websites with FAQs came along, that didn’t kill contact centers. In organizations with lots of manual processing, Robotic Process Automation came along, and didn’t kill the need for agents. As chatbots rose up to conversationally answer customers’ queries, that didn’t kill contact centers either.

Generative AI will be a tool that contact centers can use to improve automation, create efficiencies, and ultimately make our lives better – but it won’t replace contact centers.

Ashley Burton

Head of Product, Eckoh

You can connect and follow Ashley on LinkedIn where he shares insights on AI and other important tech topics.
