AI in Customer Service: Frustration with Glitchy Automation and Data Capture
“The customer is king”—a much-quoted business mantra coined by Harry Gordon Selfridge a century ago—is more of a hollow slogan today. Instead, customers of giant organisations are forced to deal with ‘systems’ that work at avoiding a human interface. In the age of digital automation, where artificial intelligence (AI) has the power to process massive amounts of data in seconds, we find ourselves battling glitchy algorithms instead of receiving the royal treatment.
 
AI was meant to improve efficiency, reduce costs and deliver quicker resolutions; and it certainly works well for routine transactions. But is it also turning into a tool for companies to avoid customer issues?
 
One might laugh at early AI mishaps, such as Alexa ordering expensive dollhouses and cookies based on a child’s conversation (The 10 Biggest AI Customer Service Fails (So Far!)), or ChatGPT scarily fabricating court cases (known as hallucination)—but those errors were assumed to be the growing pains of a new technology. The concern today is deeper: it is about the widespread use and institutionalisation of AI-driven customer service without adequate safeguards, escalation channels, or human oversight.
 
When it works, AI certainly impresses. This is mainly for routine functions such as tracking orders, booking appointments, checking status or answering FAQs. But it quickly crumbles when presented with non-standard requests that require a nuanced or customised response or escalation, and it can then turn into a frustrating barrier to basic service: providing pre-scripted and irrelevant responses, or trapping customers in a maddening feedback loop with absolutely no recourse to human support.
 
Moving house—already a logistical nightmare—can become a serious ordeal when every company depends on rigid AI systems for delivery, dismantling, reinstallation or servicing. 
 
Even your neighbourhood salon has now started sending you a WhatsApp message seeking feedback. But booking an appointment via the same route is like a mini-exam, requiring you to provide endless details such as the state, city and location of the outlet and tick services on a drop-down menu. How is this use of AI an improvement?
 
This appears to be a global problem. AI can facilitate as well as alienate. A Zendesk survey report (What Customers Expect from AI in Customer Service) showed that, although people are open to interacting with AI chatbots, 55% found that chatbots ask too many repetitive questions and 47% found that they did not get accurate answers at the end of it. Let’s look at some Indian examples.
 
Eureka Forbes has linked all its customer service to a registered mobile number. But when a technical glitch occurs and the system fails to recognise the number, the customer finds herself in a never-ending loop of polite emails that refuse to acknowledge the issue or escalate it to a human interface.
 
Reliance Digital’s chatbot spammed a customer with delivery updates about a faulty product that had already been picked up and refunded. It also triggered status-check calls from the call-centre. It took a call to the store head to finally stop the spamming. Reliance Digital also has an app that is supposed to offer an end-to-end solution, but it is not designed for proper follow-up, especially for installation and in-warranty services.
 
Another issue today is the pressure to download company apps to make payments, track deliveries or book service requests. A customer has no idea what data is being accessed by the app; but a refusal to use it would mean giving up easy access to certain services. There are serious concerns about how companies are automating, processing, storing and using our data, which includes contact, address and payment details. Unless the ministry of consumer affairs (MCA) formulates rules to protect consumers, we remain a source of endless data for AI learning.
 
Even global companies, like IKEA, are notorious for making access difficult. When you finally get a human interface, there is another problem—poorly trained call-centre assistants who repeatedly misinterpret the issue and appear to have no access to information that ought to be available on their systems. And yet, every interaction begins with the standard disclaimer that “this call may be recorded for training purposes”. One can only hope that an AI-based analysis of such calls leads to future improvements.
 
In one extreme global example, in January 2024, a technical error after an update allowed a customer to use the chatbot’s AI learning ability to embarrass DPD, a parcel delivery company. Ashley Beauchamp, a pianist and conductor, persuaded the chatbot to swear at him, criticise DPD, suggest better delivery options and even write a poem on how bad DPD’s bot was. His post on X, which went viral, said: "It's utterly useless at answering any queries, and when asked, it happily produced a poem about how terrible they are as a company." The post got millions of views before the bot was suspended by DPD.
 
What makes such failures all the more galling is that companies certainly know better. They understand that AI-based systems improve only when they are fed large quantities of high-quality, real-world data, ideally from actual human interactions. HubSpot reports how one company used AI to compress the task of processing 10,000 reviews from a few weeks to a few hours. Another used AI tools to collect and categorise data from multiple channels to understand the context and sentiment behind comments. If customer feedback is key to improving sales and service, why annoy customers with half-baked AI tools?
 
Companies seem to opt for inexpensive AI plug-ins instead of investing in hybrid models where human interaction and call-centres augment the gaps in AI performance. At the same time, they bombard customers with standard, checkbox-based feedback requests at every step of their service delivery in the hope of gathering more data. Is there any point in such feedback without nuanced information and examples that would lead to AI learning and improved systems?
 
Bad AI is a serious issue. A blog on univo.com (The Complex World of AI Failures / When Artificial Intelligence Goes Terribly Wrong) says that “up to 85% of AI projects fail” due to poor data quality. Using flawed, incomplete or biased datasets leads to unreliable outputs, and this cannot change without human interaction. Experts also observe (sobot.in) that bad AI not only increases frustration but also lowers trust in the organisation.
 
Clearly, AI is a tool, but not a magic wand. Without better training data, robust oversight and a willingness to invest in human backup, it can become a symbol of corporate indifference. 
 
Comments
gopalakrishnan.tv
3 weeks ago
Very well written. In the absence of a human interface in today’s highly digitalised atmosphere, customers and their real issues not only get ignored but they are harassed and made to suffer physically, financially and mentally. My recent experience is that a clerical error in entering a date has resulted in my entitled medical claims being rejected, and I am made to run around to get the issue sorted out, wasting time and energy and undergoing mental strain. The need for human intervention is paramount in customer-oriented services, without which there is no great future either for businesses or for customers. ‘The customer is king’ is a forgotten slogan; the ground reality is that technology rules the roost and kills all human sensitivities. The humane touch, which is the foundation of any progress of a nation, a business or anything that matters, is unfortunately missing these days, which is very sad and pathetic.
milindnadkarni
3 weeks ago
The year is 2025, and most leading banks have widely adopted automation in their operations. However, this shift has not been without its drawbacks. Customers often find themselves frustrated by persistent errors in these automated systems. To have such issues acknowledged, they are required to go through a tedious process—first proving the existence of the error by submitting screenshots, transaction videos, and detailed explanations. Only then does the issue get logged into what is often a rudimentary and outdated Complaint Management System, which offers little to no visibility into the status or progress of their service requests.

The frequency of these system errors is so high that identifying, documenting, and following up on them can take several hours. Resolutions are rarely prompt; in many cases, customers wait two to three weeks only to receive vague and convoluted explanations that require further time and effort to interpret, often prompting additional back-and-forth communication with the bank.

To make matters worse, banks typically do not provide customers with a clear escalation path. When dissatisfaction persists, customers are merely advised to lodge complaints with the Reserve Bank of India (RBI)—a process that itself offers no assurance of a faster or more satisfactory resolution. Alarmingly, this situation is not limited to small or lesser-known banks, but is increasingly evident even among the most prominent banks.
saurabh.khanna
3 weeks ago
Leave aside AI, even Natural Intelligence fails to provide the desired customer service.
It is well said that common sense is very uncommon nowadays. A focus only on sales numbers and margins will always result in an increasing number of unhappy and harassed customers.
saurabh.khanna
Replied to saurabh.khanna comment 3 weeks ago
An apt presentation of the harassment of a customer in this video
https://www.facebook.com/share/v/1939tgRe5A/
milindnadkarni
Replied to saurabh.khanna comment 3 weeks ago
I agree with views expressed by saurabh.khanna 100%