November 2024
Security and Customer Experience: Time to Prepare is Now
November 7, 2024
Written by Eric Krapf, General Manager and Program Co-Chair, Enterprise Connect Publisher
If your IT role includes contact center/CX technology, you’re almost certainly struggling with the question of AI adoption. The promise is undeniable, but the challenges come at you on all sides—from the customer experience itself to new security challenges that you’ve probably never had to consider before.
For example, this article from CX Dive cites a Five9 survey that found, “Almost half of respondents say they don’t trust information from AI-powered customer service chatbots,” and three-quarters of consumers prefer speaking to a live agent versus interacting with a chatbot. So before you even get to the point of evaluating specific AI applications, IT has to work in tandem with the business to come up with approaches that can overcome this reluctance.
Furthermore, an AI strategy that focuses too heavily on deflection – the holy grail for many top business executives – is likely to be counterproductive, the article suggests. It notes that abundant cases will remain in which a customer must be handed off from AI to a human agent, and, quoting analyst Julie Geller of Info-Tech Research Group, “customers feel more at ease with AI when it’s a well-integrated, reliable part of the service experience.”
Speaking of deflection, we’re also seeing more evidence of the risks inherent in handing customer contact over to AI chatbots and similar applications. Even if you got the interaction perfect from the enterprise’s side – you built a chatbot that was super-accurate, immune to hallucinations, and wide-ranging in the expertise it could provide—you’re still susceptible to bad actors outside your enterprise. Meet “Deceptive Delight.”
Deceptive Delight is a “prompt injection” attack against the large language models (LLMs) that power chatbots and other AI applications. As this Dark Reading article explains, prompt injection attacks are a kind of evil twin of prompt engineering, using prompts that evoke harmful responses or uncover information that shouldn’t be revealed.
Like any type of security attack, prompt injection is only growing in sophistication, and Deceptive Delight is an example of how this hack is already evolving. Dark Reading notes that with Deceptive Delight, “the assault on the [AI] guardrails is progressive and the result of an extended conversation with multiple interactions.”
In other words, contact centers in the age of AI are a lot more than call routing and IVR menus. Helping you gain a more sophisticated, in-depth understanding of the new challenges and risks is a big part of Enterprise Connect 2025’s programming around the contact center/CX. On the Conference’s opening morning, Monday, March 17, we’re running a deep dive on Customer Engagement, in which we’ll feature 3 sessions aimed at helping you understand how technology directly applies to customers’ experience. The sessions focus on bridging the gaps in customer interactions; how to fix declining customer satisfaction; and how customer journey analytics could help guide your enterprise.
When it comes to security, we’ve got a couple of sessions to help you grapple with the new challenges around AI. Amy Stapleton of Opus Research will get into the weeds of specific security attacks like prompt injection, that are now a part of your world, like it or not. And Irwin Lazar of Metrigy will lead a discussion of the strategic issues around security and compliance in an AI context.
Customer-facing interactions and security are 2 of the biggest issues as AI makes its way into the contact center. I hope you can join us in Orlando the week of March 17, 2025, for a wealth of information, insights, and best practices to help you prepare for this new world—sign up now for the best price!