Grace, not disgrace: how to polish your customer service AI
Posted: 29 June 2017 | By Charlie Moloney
“When 1,620 consumers were tested under laboratory conditions, 63% said they felt their heart rate increase when they thought about receiving great customer service”, found an American Express Service Study in 2013.
Quality customer service sets people's hearts a-flutter, and can even trigger the same cerebral reaction as feeling loved. Can an artificial intelligence (AI) unit send your customers to this retail Nirvana? Only if you follow these Golden Rules:
Keep it fluent
China Merchants Bank (CMB) has a chatbot that handles 1.5 to 2 million customer conversations per day, which would otherwise require thousands of human operators. CMB customers can pay off credit cards, apply for loans, pay utility bills and have all kinds of queries answered by a bot drawing on decades of Q&A data compiled by the bank.
Big data is one way of making sure that your AI has the brain to comprehend customer queries. The more examples the bot has of successful Q&A sessions, the more likely it will be able to deal with complicated questions, incorrect spelling and grammar, or slang.
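To make the idea concrete, here is a toy sketch of retrieval-based question answering: an incoming query is matched against a store of past Q&A pairs by word overlap. Production systems use trained language models rather than this hypothetical word-matching helper, but the principle is the same — the more high-quality examples in the store, the better the coverage of odd phrasings, misspellings and slang.

```python
# Toy retrieval-based Q&A: pick the stored answer whose question
# best overlaps the incoming query (Jaccard similarity over words).
# Illustrative only; real chatbots use trained NLP models.

def tokenize(text):
    return set(text.lower().split())

def best_answer(query, qa_pairs):
    """Return the stored answer whose question shares the most
    words with the query, measured by Jaccard similarity."""
    q_tokens = tokenize(query)

    def score(pair):
        question_tokens = tokenize(pair[0])
        overlap = q_tokens & question_tokens
        union = q_tokens | question_tokens
        return len(overlap) / len(union) if union else 0.0

    return max(qa_pairs, key=score)[1]

qa_pairs = [
    ("how do i pay off my credit card", "You can pay your card in the app."),
    ("how do i apply for a loan", "Loan applications are under Services."),
]
print(best_answer("pay credit card bill", qa_pairs))
```

Every additional Q&A pair added to the store widens the range of customer phrasings the matcher can handle, which is the sense in which "big data" gives the bot its brain.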
Outdoor clothing retailer The North Face has introduced a chatbot which can understand human speech. The AI will ask you basic questions to determine your needs, and can even tell you what the weather will be like on the date of your hike. 50,000 users interacted with the North Face AI during a trial period in January, and 75% said they'd use it again.
“I want a jacket for a weekend in NYC with my little boy”, is a request which caused North Face's customer service AI to freeze, Venturebeat writer Matt Marshall revealed in March 2016. It turned out that the marginally different phrasing, “my little kid”, was one the natural language processing (NLP) system could understand.
“We need to put language at the core of our computing experience”, says Lili Cheng of Fuse Labs, who told Computer Weekly that users shouldn't be forced to stick to certain words for fear of being misunderstood by an inarticulate chatbot. “Otherwise we are constantly mapping our brain onto the structure of the computer”, she said.
A customer who used the Hopper system, American Airlines' chatbot, to book her tickets this year discovered on the day of her flight that the bot had “claimed she had cancelled her ticket”, a clear misunderstanding which ruined her trip, as revealed in a special report in the Washington Post.
You don't need the wittiest, most charming and eloquent bot there ever was, but customers will most likely balk at completing any complicated or sensitive transaction with an AI system if they suspect the bot doesn't know what they are talking about.
Keep it controlled
It’s essential your customer service AI doesn’t put its fibre optic foot in it by leaping to the wrong conclusion based on its data. A nightmare scenario for companies adopting AI is that their robotic colleagues insult and alienate their customer base.
Studies have shown time and again that AI can adopt human prejudices and biases, and customers won’t have any trust in a bot which reveals itself to be bigoted. Let’s not forget the recent demise of Microsoft’s Tay chatbot.
On Reuters.com, a search for a name considered typical of a black person was 25% more likely to produce a search page featuring a targeted advert containing the words ‘arrest’ or ‘arrested’, according to a 2013 study from Harvard University.
This example should warn companies against deploying their AI in areas where it might overstep its mark and make a decision on something which it doesn’t really understand.
Call centres now use AI to help human operatives, using behavioural science to automatically detect when customers are upset or tense, and to flag mistakes the operative might make, such as speaking too quickly or interrupting. But sometimes, interrupting is a good thing.
New Yorkers famously speak over each other routinely. “Interrupting can thus be likeable and build rapport with [New Yorkers]. But the same behaviour with some other callers could be seen as rude”, pointed out professor Rosalind Picard in the MIT Technology Review.
The idea of humans and AI working together in customer service is a good one, but it may be better for the AI to defer to the human operator, rather than the other way around, no matter how much data or behavioural science theory the AI has been equipped with.
A safe bet is a chatbot which will refer questions to a human operator if it is less than 90% sure of the answer. The chatbot can then suggest the three answers it thinks most likely to be right, and let the human operator make the final call.
This is a middle ground, and although businesses won’t make those big savings by going completely automated, they’ll keep faith with their customers rather than unwittingly insulting them.
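That escalation rule is simple enough to sketch. The snippet below assumes a hypothetical classifier has already produced candidate answers with confidence scores, sorted highest first; the routing logic itself is just a threshold check and a human handoff.

```python
# A minimal sketch of confidence-based escalation. The candidate
# answers and their scores would come from an upstream NLP model
# (assumed here, not implemented).

CONFIDENCE_THRESHOLD = 0.90

def route_query(candidates):
    """Answer automatically above the threshold; otherwise escalate
    the top three candidates to a human operator for the final call.

    `candidates` is a list of (answer, confidence) pairs,
    sorted with the highest confidence first.
    """
    best_answer, best_confidence = candidates[0]
    if best_confidence >= CONFIDENCE_THRESHOLD:
        return {"handled_by": "bot", "answer": best_answer}
    return {
        "handled_by": "human",
        "suggestions": [answer for answer, _ in candidates[:3]],
    }

# The bot is only 72% sure, so a human sees three suggestions.
result = route_query([
    ("Reset your PIN in the app", 0.72),
    ("Call the support line", 0.18),
    ("Visit a branch", 0.06),
])
```

The threshold is the business lever: raising it sends more conversations to humans and protects trust, lowering it captures more of the automation savings.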
Keep it transparent
“If I am looking for the nearest Starbucks, who cares if Siri knows where I am standing?” says Ali Lange, a policy analyst at the Center for Democracy and Technology, as quoted by the MIT Technology Review.
Lange's view is far from universal, and it's a recipe for disaster if a business simply assumes that customers will be happy to have their information and data accessed so long as they get a useful product in return.
Data is often essential to AI. It’s now the case that, given enough data, machine intelligence can be better at judging customers’ credit scores than humans, which could make thousands more people eligible for loans, who currently are considered likely to default.
Chinese search engine Baidu, which runs a small lending business, approved 150% more borrowers, with no increase in losses on its loans, after adopting AI software which analysed data from borrowers' online search behaviour, mobile wallets and other sources.
However, where does a company draw the line? What about investigating their customers' social media accounts? A lending business could do well to look at a potential borrower's LinkedIn, where it should be impossible to lie about your employment.
It’s “creepy” to stalk your customers on Facebook, or anywhere else, in the view of Douglas Merrill, former Google CIO. Frantically data mining at all costs might develop your AI, but any actions taken in secret risk causing a negative backlash against your brand.
Academics from the universities of Michigan, Illinois, and California State reported that Facebook users expressed “dissatisfaction and even anger” when they learned that their News Feeds were being tailored by an algorithm which hid certain posts without their knowledge.
So, don’t deploy your customer service AI in an underhanded way, train it to behave itself and show due restraint, and make sure it’s a smooth talker. All going well, your customers’ heart-rates will be off the scales whenever they use your system. That’s amore!