By Sid Victor, Head of Support Services at Movate
AI’s new inflection point, generative technology, has taken the world by storm and could be the next giant leap to unleash the innovation and productivity we need. The market is projected to exceed $109B by 2030, growing at a CAGR of 35.6%. ChatGPT’s meteoric rise is attributable to generative AI’s capability to augment human effort. Enterprises envision building better customer relationships and crafting more engaging, personalized experiences by tapping the prowess of this disruptive technology.
The Future is Unfolding
The talk of the town is leveraging this technology through “human-in-the-loop” workflows: automating repetitive tasks, boosting creativity and content creation, supercharging human support, and fine-tuning Large Language Models (LLMs) for specific domains, enterprise use cases, and customer scenarios.
According to a survey of global business leaders, AI foundation models will play a pivotal role in enterprise strategies over the next three to five years, and an estimated 60% of IT leaders are looking to implement generative AI. Tech service providers have chalked out roadmaps to accelerate their existing AI platforms by infusing them with generative AI models. OpenAI has released APIs for accessing its flagship models, and vendors are exploring integrations with the latest generative models (such as Stable Diffusion and DALL-E) and inking partnerships with the pioneers to level up their CX capabilities.
The landscape of customer support strategy is witnessing a shift as the benefits of generative AI unlock use cases with a human-centric approach. Around 95% of enterprise leaders believe generative AI is ushering in a new dawn of enterprise intelligence. The game-changing technology will likely transform CX to deliver unprecedented customer delight across service operations. ChatGPT, for example, is known for its content orientation, deep language understanding, contextual response generation, and flexibility in open-domain conversations.
The X-Factor in Support
The critical question is this: How will this new advancement in AI outperform the present new-age technologies in a customer support context? What does the technology genuinely add when measured against the yardstick of customer experience?
A piece in The Wall Street Journal cited how enterprises are leveraging this significant breakthrough in NLP to make customer service bots even smarter. Over the years, the journey has moved from linear chatbots to cognitive, conversational AI, and now to generative bots: highly adaptable, interactive agents. LLM-powered chatbots facilitate conversations and tap knowledge gleaned and retained from previous interactions.
We are now entering a language-proficiency era, marked by generative models capable of handling customer support interactions with a humanized level of maturity. Generative bots deliver far richer contextual and language understanding than costly, rigid, manually configured FAQ chatbots that simply return a list of articles to read. These “out-of-the-box” models mine deeper into the troves of data in knowledge bases and untapped touchpoints, and automation takes a turn for the better as the models deliver predictive intelligence and content aggregation capabilities.
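To make the idea of mining knowledge bases concrete, here is a minimal retrieval-augmented sketch in Python. It assumes the OpenAI Python client (v1) and a small in-memory list of articles; the article texts, model names, and helper functions are illustrative placeholders, not any vendor’s actual implementation.

```python
# Minimal retrieval-augmented answering sketch (illustrative only).
# Assumes `pip install openai numpy` and OPENAI_API_KEY set in the environment.
import numpy as np
from openai import OpenAI

client = OpenAI()

# Hypothetical knowledge-base articles; in practice these come from the KM system.
articles = [
    "To reset your router, hold the reset button for 10 seconds until the lights blink.",
    "Refunds are processed within 5-7 business days after the request is approved.",
    "The premium plan includes 24/7 chat support and priority routing.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

article_vecs = embed(articles)

def answer(question, top_k=2):
    """Retrieve the most relevant articles and generate a grounded reply."""
    q_vec = embed([question])[0]
    scores = article_vecs @ q_vec / (
        np.linalg.norm(article_vecs, axis=1) * np.linalg.norm(q_vec)
    )
    context = "\n\n".join(articles[i] for i in scores.argsort()[::-1][:top_k])
    chat = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0.2,      # stay close to the retrieved source material
        messages=[
            {"role": "system",
             "content": "Answer using only the provided knowledge-base context."},
            {"role": "user",
             "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("How long do refunds take?"))
```

In a real deployment, the in-memory list would be replaced by a vector store populated from the enterprise knowledge base and refreshed as articles change.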
As human agents receive assistance from co-pilots, the technology reduces manual effort and amplifies the agent experience: NPS and CSAT monitoring, extracting relevant information from multiple articles, and generating human-like summaries. Gen AI-powered solutions deepen customer engagement via multilingual support, next-best actions, precise recommendations, onboarding, appointment setting, and scheduling. Decision-makers can more accurately decode customer emotions and sentiment, apply empathy reasoning and topic clustering, and summarize content from various channels.
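As a hedged illustration of the topic-clustering capability mentioned above, the sketch below groups customer messages from different channels by embedding similarity; the sample messages, model name, and cluster count are assumptions chosen for demonstration only.

```python
# Illustrative topic clustering of omnichannel customer messages (not production code).
# Assumes `pip install openai scikit-learn` and OPENAI_API_KEY set in the environment.
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()

# Hypothetical messages pulled from chat, email, and social channels.
messages = [
    "My invoice was charged twice this month.",
    "The app keeps crashing when I upload photos.",
    "I was billed again after cancelling my subscription.",
    "Uploads fail with an error on the latest app version.",
]

# Embed the messages so semantically similar complaints land near each other.
resp = client.embeddings.create(model="text-embedding-3-small", input=messages)
vectors = [d.embedding for d in resp.data]

# Group into broad topics (billing vs. app issues in this toy example).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in zip(labels, messages):
    print(f"topic {label}: {text}")
```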
Deploying this technology will level up CSAT through hyper-personalized responses, broader customer reach across languages, faster resolutions, and scalability to meet expanding business needs in the future.
Be Wary of Blind Spots
A survey of IT leaders indicates that 67% are prioritizing generative AI for the next 18 months, and one-third have it as a top priority, but challenges remain. The technology is still nascent, with rapid rounds of experimentation and innovation in progress, and a great deal of research, validation, and testing is still underway.
Leaders need to be wary of the reputational risks and brand damage arising from hallucinated responses (coherent nonsense) amidst sensitive customer interactions—for example, providing a false credit card interest rate to a customer seeking information.
Brands must address bias, data privacy laws, copyright issues, human verification of output, transparency, security audits, and diverse, inclusive, representative data sets through an ethical and responsible AI governance framework.
With all the buzz around the new AI arms race, clients need to ask service providers the tough questions around security, accuracy, and governance. These include:
• Are sufficient data guardrails in place?
• What’s the differentiating factor of a particular service offering? Can other vendors also replicate the results with the same API?
• Is this the “real deal” integration with a high-profile generative model or another duplicate?
• Is this LLM trained on specific domain use cases (such as telecom or the contact center), and does it possess topic-centric grounding?
Consider whether a domain-specific LLM with a large or a small parameter set best fits the use case; models trained on a smaller set of parameters and curated domain data can outperform larger general-purpose models at lower cost. Accurate, complete, unified data and enhanced cybersecurity measures are paramount for trustworthy innovation.
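One common route to such a smaller, domain-tuned model is parameter-efficient fine-tuning. The sketch below uses LoRA adapters via the Hugging Face peft library; the base model, dataset path, and hyperparameters are placeholders rather than recommendations.

```python
# Parameter-efficient fine-tuning sketch (LoRA) for a domain-specific support model.
# Assumes `pip install transformers peft datasets` and a small curated dataset of
# support dialogues at ./support_dialogues.jsonl (a hypothetical path).
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

base_model = "gpt2"  # placeholder; swap in the open LLM your team has licensed
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# LoRA trains a small set of adapter weights instead of the full parameter set.
model = get_peft_model(
    model, LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.05)
)
model.print_trainable_parameters()  # typically well under 1% of the base model

dataset = load_dataset("json", data_files="support_dialogues.jsonl")["train"]

def tokenize(example):
    tokens = tokenizer(example["text"], truncation=True,
                       padding="max_length", max_length=512)
    tokens["labels"] = tokens["input_ids"].copy()
    return tokens

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="support-lora", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset.map(tokenize, remove_columns=dataset.column_names),
)
trainer.train()
model.save_pretrained("support-lora-adapter")  # only the small adapter is saved
```

Because only the adapter weights are trained, the resulting artifact is small enough to version, audit, and swap per domain without retraining the base model.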
David Truog, a principal analyst specializing in technology and design at Forrester Research Inc., stated that some experimentation is apt, but it is too early to deploy mission-critical applications on this technology. Simple use cases like self-service would be an ideal starting point to build on. Leaders needn’t rush; they should assess the situation and explore partners’ expertise before embarking on significant initiatives.
What Should Contact Center Leaders Do?
Customer service and support leaders need to capitalize on this opportunity to extract the maximum value for the contact center. Start the journey by working with the right tech vendor to:
• Conduct an enterprise maturity assessment to see how to integrate generative AI into the current ecosystem.
• Assess current data and integration requirements.
• Ascertain top use cases to see early success.
• Chart a detailed enterprise roadmap involving data integration and technical architecture.
The enterprise-grade generative AI tool stack is comprehensive, and customer service leaders need not get bogged down by the enormity of the task. Organizations like Movate offer customized packages to get started.
Here are some key foundational steps in the journey to build a generative AI roadmap:
Define and document a formal enterprise AI policy that covers ethical AI guidelines with audit mechanisms to verify outcomes.
Find the right use cases. Use data analytics from CRM/ITSM systems to narrow down to low-complexity, high-impact use cases. ChatGPT’s information-summarization capability can synthesize distributed customer feedback and NPS & CSAT data into meaningful insights.
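As a minimal sketch of that summarization step, the snippet below asks a chat model to distil a batch of NPS verbatims into themes; the sample comments and model choice are assumptions, and in practice the input would come from a CRM/ITSM export.

```python
# Illustrative synthesis of NPS/CSAT verbatims into themes (not production code).
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical verbatims exported from a CRM or survey tool.
verbatims = [
    "Score 9: The agent resolved my issue in one chat, very impressed.",
    "Score 4: Waited 30 minutes on hold before anyone answered.",
    "Score 6: Helpful agent, but I was transferred three times.",
    "Score 3: The self-service portal never recognises my account number.",
]

prompt = (
    "Summarise the customer feedback below into the top themes, each with a "
    "one-line description and an overall sentiment (positive/negative):\n\n"
    + "\n".join(verbatims)
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    temperature=0.3,
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```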
Evaluate the “build” vs. “buy” decision, as it takes a wider team effort to make the journey successful. Given the diversity of technology and the growing number of industry-specific solutions, defining the adoption journey can be daunting. Investing in business-differentiating features on top of standard GPT-enabled platforms will be key to proper outcomes. Start using secure GPT-enabled CRM platforms trained on customers’ internal data, with knowledge curation and contextual training.
Strive for accuracy and build trust. Most organizations have data integrity problems; the existing knowledge base needs enrichment and curation before it is used to train the model. Establish a KM governance framework covering the selection, enrichment, and training process, with apt approval workflows and human validation, and fine-tune first-draft ChatGPT responses for accuracy in subsequent iterations. Managing the algorithmic dial (for example, how conservatively the model samples its responses) is critical for accurate outcomes.
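One way to picture the first-draft-then-human-validation loop is the sketch below: generate a conservatively sampled draft and route it through an agent approval step before anything reaches the customer. The approval, send, and logging steps are hypothetical stand-ins for a real agent-desktop integration.

```python
# Sketch of a draft-then-approve workflow with a conservative sampling setting.
# Assumes `pip install openai`; the approval and send steps are simple stand-ins.
from openai import OpenAI

client = OpenAI()

def draft_reply(ticket_text: str, kb_context: str) -> str:
    """Generate a first-draft reply grounded in curated knowledge-base content."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        temperature=0.1,      # a low temperature keeps the draft close to the source
        messages=[
            {"role": "system",
             "content": "Draft a support reply using only the provided context. "
                        "If the context is insufficient, say so."},
            {"role": "user",
             "content": f"Context:\n{kb_context}\n\nTicket:\n{ticket_text}"},
        ],
    )
    return resp.choices[0].message.content

def agent_approves(draft: str) -> bool:
    """Placeholder for the human validation step (an agent review UI in practice)."""
    return input(f"Draft reply:\n{draft}\n\nApprove? [y/N] ").strip().lower() == "y"

def handle_ticket(ticket_text: str, kb_context: str) -> None:
    draft = draft_reply(ticket_text, kb_context)
    if agent_approves(draft):
        print("SENT:", draft)                   # stand-in for the real send action
    else:
        print("LOGGED FOR REVIEW:", draft)      # capture rejections to improve the model

handle_ticket("My refund hasn't arrived after two weeks.",
              "Refunds are processed within 5-7 business days after approval.")
```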
Define the security governance process with a holistic, human-integrated approach. Governance entails architecture, data readiness, the model training process, data residency, and authentication techniques. End-to-end encryption, SSO, MFA, ISO/IEC adherence, and GDPR and CCPA compliance are table stakes.
The last step is organizational change management. Communicate enterprise adoption plans, create bridging courses, and help employees and partners align with organizational goals in advance.
A Watershed Moment
The disruption is unlike any other in the last decade or two and now is the time to realize its benefits. Gartner says innovation in AI is accelerating and creating numerous use cases in generative AI across industries. With a combination of cloud and new-age technologies, generative AI is set to open new frontiers to bridge the physical and digital worlds. A pivotal moment in AI poised to reinvent business and revolutionize CX is unfolding.
Though a ton of excitement is in the air, leaders need to communicate the benefits of AI’s new dawn to their support teams, address the disruption to the future of work, and quell the collective anxiety around this buzzy technology.
Here’s an excerpt from ChatGPT’s response on what the future of CX would be with generative AI:
“…Overall, the future of CX with generative AI holds promise for enhancing personalization, efficiency, and customer satisfaction. As the technology advances, it will be crucial to balance automation and human touch, creating seamless and meaningful interactions between businesses and their customers.”