Fraud Alert: Dark Side of AI
Someone who chose to remain anonymous has uploaded a slightly altered version of a popular song onto a music-sharing site. The song plays just the same to human ears, but cleverly slips past the artificial intelligence (AI) copyright checker while 'creating' the altered version. In another scenario, a competitor bombards an insurance company’s AI-powered quote tool with queries—just enough to reverse-engineer the model and create a replica, saving themselves months of development work. Elsewhere, a generative AI chatbot starts revealing sensitive pieces of training data when prompted in a certain way.
 
This is not science fiction. These are real examples of how attackers are exploiting the weaknesses in AI systems. It also demonstrates why the pioneers of AI have always been concerned about risks.
 
No doubt, AI is transforming our world in amazing ways—streamlining logistics, helping diagnose diseases, making customer support smarter and boosting creativity across industries. However, as technology becomes more powerful, it becomes a bigger target. Unfortunately, criminals are learning to exploit AI just as fast as the machines are learning to become more powerful.
 
AI is not perfect. Like any system, it can be manipulated, attacked, or misused. And as much as it helps us, it also opens up new opportunities for cybercriminals and fraudsters to exploit flaws in its design, training, or implementation.
 
In the circumstances, it is essential for everyone to understand AI and recognise that it has a darker side to it. One must understand how attackers are taking advantage of AI and how it can be manipulated. In fact, there are several well-known methods that attackers are already using to target AI models.
 
Data poisoning is one such method. It involves corrupting an AI model’s training data—essentially teaching it the wrong information. The model might then misclassify data, behave unpredictably, or produce biased or harmful results. This can happen if the training data is taken from unreliable sources or deliberately manipulated.
 
Well-known investor and research analyst Dr Vijay Malik shared his example of how ChatGPT showed the wrong benefits of reinsurance for companies. "Asking ChatGPT about the benefits of reinsurance for companies with an example, it showed the following calculation with obvious errors in the output column 'With 30% insurance'. Upon pointing out the error, it rectified it in subsequent answer. However, please note that if you do not recheck its outputs, you may make erroneous decisions,” he says in a post on X.
 
 
Then there are adversarial attacks, where the inputs fed to an AI system are subtly modified in a way that misleads the model, even though everything looks normal to a human observer. A classic example is tweaking a copyrighted song’s playback speed just enough to evade AI copyright detection, while listeners still recognise the song.
 
Model inversion is another tactic. By analysing how an AI model responds to different queries, attackers can start piecing together the original data it was trained on—sometimes exposing personal or sensitive information.
 
Similarly, model stealing involves reverse-engineering an AI tool by sending repeated queries and copying the outputs. Imagine a competitor cloning a sophisticated quote engine without investing in its development, just by probing it systematically.
 
Another growing concern is prompt injection, where hidden commands are embedded in user inputs to hijack an AI’s behaviour. This can bypass safety filters and make the AI respond in unintended—and sometimes dangerous—ways.
 
We also need to be wary of hallucination exploitation. AI tools are known to 'hallucinate', or invent entirely false information – this is one of the first flaws that most people had noticed on ChatGPT. Fraudsters or criminals can use this flaw to spread misinformation, generate fake content, or craft convincing phishing messages.
 
Finally, backdoor attacks involve hiding malicious triggers in the AI during its training phase. The AI responds incorrectly when a specific input is received, potentially leading to harmful consequences.
 
Despite these risks, AI remains a powerful tool, as long as it is used responsibly. Here are a few suggestions that help reduce exposure to the threats outlined above.
 
1. Choose AI Tools with Care
Not every AI app you find online is trustworthy. Some are fake, designed to install malware or harvest your data. Stick with tools approved by your organisation or sourced from verified, reputable developers.
 
2. Never Share Sensitive Information
Many AI services use your data to train their models. That means anything you input might be stored or analysed later. Avoid submitting confidential client data, intellectual property, or personal details—especially if you do not want them exposed publicly.
 
3. Review Access Permissions
Some AI tools connect with your email, files, or messaging platforms. When setting these up, take time to review access settings. Make sure the tool does not have broader access than it needs, and review permissions regularly.
 
4. Don’t Assume AI Is Right
AI tools are only as good as the data they are trained on. If that data is outdated, biased, or tampered with, the output will reflect that. Always verify AI-generated content, particularly if it is being used to inform decisions or support professional work.
 
5. Stick to Appropriate Use Cases
Use AI to assist, not decide. AI is excellent at assisting with routine tasks or generating content. It is not a substitute for human judgement, especially in areas like law, healthcare or finance. Use it as a tool, not a final decision-maker.
 
6. Use Secure Access and Authentication
Just like any online service, your AI account should be protected with a strong password and multi-factor authentication (MFA). Avoid using public devices and always log out when your work is over.
 
7. Consider Using Anonymous Accounts
Use a separate, anonymised account if you do not need your AI account linked to personal or company details. This helps reduce exposure in the event of a breach.
 
8. Check for Plagiarism
AI tools often repurpose language from existing sources. If you are using AI to write articles, blog posts, or website content, make sure it is original, or at least properly edited and credited.
 
9. Stay Alert to Suspicious Activity
AI can be used to create fake videos, clone voices and craft incredibly convincing phishing scams. Be cautious when consuming or sharing AI-generated content, especially if something does not seem quite right.
 
AI is not going away—in fact, it will only become more central to how we live and work. But like any powerful tool, it comes with its own set of risks. Being aware of those risks—and knowing how to protect yourself—makes all the difference.
 
By staying informed, making smart choices and treating AI with the same caution you would give any sensitive technology, you can enjoy its benefits without falling into its traps.
 
After all, in this rapidly evolving digital world, a little caution goes a long way.
 
Stay Alert, Stay Safe!
 
Comments
Sky-high Promises, Grounded Realities: The Story of Indian Air Travel
Sucheta Dalal, 17 April 2025
Air travel remains unrivalled as the preferred mode of long-distance transportation globally, with 4.7bn (billion) passenger trips logged in 2023, according to the International Air Transport Association (IATA). 
 
Its allure lies...
Fraud Alert: AIs Creating Genuine-looking 'Fake' Aadhaar, PAN Cards!
Yogesh Sapkale, 11 April 2025
In an alarming demonstration of how artificial intelligence (AI) can be exploited, it has emerged that AI tools, including OpenAI's ChatGPT, can be misused to generate photo-realistic images of Aadhaar and permanent account number...
Fraud Alert: Betting App Volcano!
Yogesh Sapkale, 04 April 2025
A few days ago, responding to a message on X, Shamim Akhtar, country head at BFL group (e-com), made a shocking revelation. In his village, he says his nephews, aged 14 and 16, told him to put (invest) Rs49 (in a mobile game/app) and...
Fraud Alert: Dangerous Attachments on WhatsApp, SMS, Emails
Yogesh Sapkale, 28 March 2025
A few days ago, Bhuvaneswari, a retired banker and counsellor at Moneylife Foundation, told us about a message she received on WhatsApp. The message file received from an unknown sender was named as a wedding invitation. However, it...
Array
Free Helpline
Legal Credit
Feedback