Fraud Alert: AI Chatbots under Attack!
In today's digital era, artificial intelligence (AI) is transforming how businesses and consumers interact. From chatbots handling customer service inquiries to advanced AI agents facilitating banking transactions, conversational platforms are increasingly integrated into daily life. However, as with any technology, AI-driven systems are attracting the attention of cybercriminals, and thus the risks to consumers are increasing.
 
Last month, US-based cybersecurity and intelligence firm Resecurity identified a posting on the dark web related to the monetisation of stolen data from one of the major AI-powered cloud call centre solutions in the Middle East. "The threat actor gained unauthorised access to the platform's management dashboard, which contains over 10.21mn (million) conversations between consumers, operators, and AI agents (bots)."
 
Chatbots are a fundamental part of conversational AI platforms, designed to simulate human conversations and enhance user experiences. Such components could be interpreted as a subclass of AI agents responsible for orchestrating the communication workflow between the end user and the AI. 
 
On the other hand, conversational AI chatbots can offer personalised tips and recommendations based on user interactions. This capability enhances the user experience by providing tailored responses that meet individual needs. 
 
As expected, chatbots used by AI agents and conversational AI platforms can collect valuable data from user interactions which can be analysed to gain insights into customer preferences and behaviours. 
 
According to Resecurity, financial institutions (FIs) are widely implementing such technologies to accelerate customer support and internal workflows, which may also trigger compliance and supply chain risks. "Many of such services are not fully transparent regarding data protection and data retention in place, operating as a 'black box', and associated risks are not immediately visible. That may explain why major tech companies restrict employee access to similar AI tools, particularly those provided by external sources, due to concerns that these services could take advantage of potentially proprietary data submitted to them." it says.
 
Let us understand these new emerging threats by cybercriminals targeting AI agents and conversational platforms. Cybercriminals exploit AI modules, launch adversarial attacks on AI models, harvest data via AI chatbots, launch deepfake-based attacks, and create large-scale social engineering attacks using AI-driven systems.
 
1. AI Exploitation by Cybercriminals
AI agents such as those used in customer service or virtual assistants like Siri and Alexa are designed to streamline user tasks. However, these systems are increasingly vulnerable to exploitation by cybercriminals. Attackers can deceive consumers into sharing sensitive information or transferring funds by hijacking or manipulating AI agents. As users often trust AI to handle routine tasks, this trust can be exploited for malicious purposes.
 
The best example of this exploitation is AI-powered voice phishing (vishing) in which fraudsters use AI to mimic the voices of legitimate representatives. For instance, they might create a convincing imitation of a bank representative or government official, urging a victim to provide financial information or authorise fraudulent transactions. Since the voice sounds convincingly lifelike, the user is more likely to comply with the attacker's demands.
 
2. Adversarial Attacks on AI Models
The adversarial attack involves feeding malicious input into AI systems to manipulate the decision-making processes. For instance, cybercriminals can intentionally craft specific queries that exploit vulnerabilities in the AI's algorithm, leading it to respond incorrectly or disclose sensitive information. Attackers might use this technique to manipulate AI agents in conversational platforms to bypass security measures or trick them into revealing confidential data.
 
A report from Mozilla found that the recent explosion of romantic AI chatbots is creating a whole new world of privacy concerns. "They can collect a lot of (really) personal information about you… But, that is exactly what they are designed to do! Usually, we draw the line at more data than is needed to perform the service, but how can we measure how much personal data is 'too much' when taking your intimate and personal data is the service?"
 
Instead of the 'old-school' email messaging, conversational AI platforms enable interaction via AI agents that deliver fast responses and provide multi-level navigation across services of interest in near real-time. 
 
However, conversational AI, such as customer service bots or romantic AI chatbots, can be manipulated to extract information like account details, credit card numbers, or other private data by exploiting poorly designed conversational flow or security gaps in the platform. Hackers can craft questions or input data that force the bot to bypass authentication mechanisms or expose sensitive information inadvertently.
 
3. Data Harvesting via AI Chatbots
It is an open secret that AI agents often collect and store vast amounts of data to improve their services. It includes personal details, transactional history and even sensitive conversations. If these systems are compromised, cybercriminals can harvest personal information which they can later use in identity theft, account takeovers, or sophisticated phishing schemes.
 
For example, a poorly secured AI system used by a large retail chain might inadvertently expose customer details to attackers. Once a hacker gains access to the data stored by the chatbot—such as email addresses, payment information, or order history—they can launch targeted phishing campaigns, leading to financial fraud or identity theft.
 
4. Deepfake-based Attacks
Deepfake technology, which uses AI to create hyper-realistic videos or audio-mimicking individuals, has introduced a new layer of risk. Cybercriminals are using deepfakes in combination with AI-powered conversational platforms to manipulate or scam consumers.
 
Imagine receiving a video call from someone who looks and sounds exactly like your boss, asking you to transfer company funds or share confidential documents. Deepfakes can enable such attacks by making them far more believable than traditional phishing attempts, thus increasing the likelihood of success. 
 
5. Social Engineering via AI-driven Systems
AI agents are built to simulate human interaction, but cybercriminals can also use them to execute large-scale social engineering attacks. By leveraging AI chatbots or virtual assistants, attackers can create highly personalised and convincing social engineering schemes that manipulate individuals into providing private information, clicking malicious links, or falling for fraudulent schemes.
 
Cybercriminals deploy AI-driven bots on social media or messaging platforms to initiate a 'conversation'. These bots can simulate a natural person's conversational style, making it harder to detect. The bot can easily pose as a friend or coworker, asking for help or providing links to malicious websites that lead to malware infections or credential theft. 
 
The experts from Resecurity outlined the need for AI trust, risk, and security management (TRiSM), as well as privacy impact assessments (PIAs) to identify and mitigate potential or known impacts that an AI system may have on privacy, as well as increased attention to supply chain cybersecurity. 
 
"Conversational AI platforms have already become a critical element of the modern information technology (IT) supply chain for major enterprises and government agencies. Their protection will require a balance between traditional cybersecurity measures relevant to SaaS (software-as-a-service) and those specialised and tailored to the specifics of AI," the cybersecurity and intelligence firm says. 
 
While the risks associated with cybercriminals targeting AI agents and conversational platforms are significant, consumers can take several steps to protect themselves. 
 
Here are a few suggestions...
  • Stay Sceptical: Always verify the identity of individuals or entities that request sensitive information, especially if the request comes from an AI agent, chatbot, or virtual assistant.
     
  • Enable Multi-factor Authentication (MFA): Whenever possible, activate MFA on your accounts. Even if a cybercriminal obtains your login credentials, MFA adds an additional layer of security.
     
  • Be Cautious with Chatbots: Avoid sharing sensitive information with AI-powered systems unless you are certain the platform is legitimate and secure.
     
  • Monitor Financial Transactions: Monitor bank statements and transaction histories for any unauthorised activity. Remember, cybercriminals, too, are using AI systems to initiate fraudulent transactions without your knowledge.
     
  • Educate Yourself about Deepfakes: As deepfake technology becomes more advanced, learning to recognise warning signs—such as unnatural body movements or discrepancies in voice quality—can help avoid falling victim to a scam.
 
As AI technology continues to evolve, so will cybercriminals' tactics. The integration of AI agents and conversational platforms into everyday life presents both exciting opportunities and serious risks for consumers. 
 
Nonetheless, the onus also lies on developers to create more secure AI-driven platforms, ensuring that consumers can benefit from the technology without unnecessary risk. However, very few developers have the resources (read: money) to pay attention to these vulnerabilities. Most often, developers are found rushing toward the project completion deadline and overlooking the security aspects of chatbots. 
 
By staying informed and practising good cybersecurity hygiene, you can protect yourself from the growing threat posed by cybercriminals targeting chatbots used by AI agents and conversational platforms. 
 
Stay Alert, Stay Safe!
 
Comments
parthi1961
4 weeks ago
Informative .
Fraud Alert: Your Data Is under Siege!
Yogesh Sapkale, 04 October 2024
A few days ago, Santosh Patil, a friend, received a call from a person who said he was calling from a courier company. He was told there was a parcel for him from a well-known jeweller, but he needed to rectify an error in the...
Fraud Alert: Postal Delivery Scams
Yogesh Sapkale, 27 September 2024
Over the past few years, cybercriminals have been duping people in the name of FedEx, a global courier service. They started tracking people who were frequent travellers abroad and then expanded their reach. These criminals have now...
Fraud Alert: WhatsApp Hacking through WhatsApp!
Yogesh Sapkale, 20 September 2024
Two days ago, Nichola, a colleague, informed me that she had received a message from her friend on WhatsApp (WA) asking her to share a six-digit code sent to her mobile via SMS. The friend claimed the code was sent by mistake. Nichola...
Fraud Alert: The Old, Bad Fake Antivirus Scan Scams Are Back
Yogesh Sapkale, 13 September 2024
Before 2010, many users of personal computers (PCs) and laptops complained about pop-ups while using the internet. In later years, with browsers embedding blockers in the software, these pop-ups disappeared. But in the digital space,...
ArrayArray
Free Helpline
Legal Credit
Feedback