Suman Reddy, MD, Pegasystems, India
‘AI models need to be tuned to be more empathetic to customer needs’
If we want humans to trust artificial intelligence (AI), then we need to incorporate empathy, which is at the heart of ethics issues related to AI systems, says Suman Reddy, Managing Director, Pegasystems, India.
As businesses attempt to improve customer service, many use AI to help make customer decisions faster, cheaper and more intelligently. “They are also using chatbots or intelligent virtual assistants to streamline conversations by taking over routine customer queries. However, in their quest to personalise interactions and reduce interaction times, the human touch is being eliminated,” Reddy told Business Line.
Eventually, customer service becomes impersonal, inefficient, and does not adapt to the customer context, he points out.
The concept was borne out by a survey recently conducted by Pegasystems, in which 70 per cent of respondents said they preferred to speak to a human at the other end of the line, not a chatbot.
The nature of AI means that although efficient, it sometimes operates in a way that lacks what a human might describe as empathy, or bombards customers with recommendations that may actually be detrimental to customer loyalty.
Customer service decisions
Reddy insists that for more complex customer engagement scenarios beyond AI’s reasoning capabilities, organisations must allow humans to take over.
“They can use their natural capabilities of judgement and reasoning to resolve cases. By combining AI with live human customer agents, customers get the benefit of human judgement alongside AI, allowing the agent to vet the AI’s recommendations before deciding which path to go down,” he adds.
This integrated approach, he continues, could help manage difficult conversations that do not fit a pre-defined response or require a lot of nuanced judgement.
Citing an example, he says a bank’s sales team could be using data analysis and pattern recognition through their AI to boost their quarterly numbers. “While the AI algorithm might comply with mandated regulations, it could offer a high-interest loan or insurance premium to a family that can barely afford it. The plan might not be sustainable for them or the bank. In this case, the AI will not have the discretion to withhold that offer, given that it will be configured to achieve the bank’s objective of maximising profit, even at the expense of the customer’s interest in avoiding an over-expensive proposition,” says Reddy.
In such a scenario, there should be a way for AI “to make more empathetic decisions that balance business needs and customer needs so, in the end, everyone wins. A mutually beneficial transaction helps the business’ bottom line and engenders trust from the customer, which will translate into customer loyalty in the long run,” he adds.
AI needs empathy
AI today is already being used to predict the future. Police forces use it to map crime, doctors use it to predict when a patient is most likely to have a heart attack or stroke, and researchers are even trying to incorporate AI into their experiments to plan for unexpected consequences.
Many decisions require a good forecast, and AI agents are almost always better at forecasting than their human counterparts, or so the general thinking goes. Yet for all these technological advances, Reddy says consumers still seem to deeply distrust AI predictions.
Recent findings from a global Pega survey found 65 per cent of respondents don’t trust that companies have their best interests at heart.
“If they don’t trust companies, how can they trust the AI that companies use to engage with them?” questions Reddy. The survey showed only 30 per cent of customers felt comfortable with businesses using AI to correspond with them.
“While AI has tremendous advantages, people are sceptical about whether businesses really care about and empathise with their particular situations,” he adds.
“Customer service teams…