Many of us receive phone calls from unknown numbers, only to realise it is a scammer pretending to be a bank official, a digital wallet company, or an e-commerce platform. What has changed in recent years is how convincing these calls have become. The language used is smooth, polished and highly professional. But no, these fraudsters have not suddenly earned degrees in communication. Instead, they are using artificial intelligence (AI), machine learning (ML) and large language models (LLMs) to generate smart, personalised scripts—often adjusting their words in real time.
In many cases, cybercriminals pose as representatives from well-known Indian e-wallets and fin-tech companies like Paytm, PhonePe, or Razorpay. They use AI-generated scripts that sound highly convincing and professional. These scripts are often tailored in real time based on how the user responds, making the conversation seem even more authentic. Once the victim is engaged, the scammers typically ask them to install screen-sharing apps or reveal one-time passwords (OTPs), ultimately leading to financial fraud and loss.
AI tools are no longer exclusive to big tech companies—they are now being used by cybercriminals to carry out fraud that is faster, smarter and more personalised. People across India, regardless of age or how tech-savvy they are, have been tricked by these highly convincing and well-crafted scams.
Earlier, it was easier to recognise a scam email—poor grammar, strange wording and vague, generic messages were clear warning signs. But today’s cybercriminals have upgraded their tactics. Using AI, LLMs and ML tools, they can now craft emails that are polished, personalised and highly convincing. These modern scams are not only harder to detect—they are also far more successful.
Today, scammers use AI to craft flawless emails and messages that look exactly like they are from trusted organisations. They replicate official websites, design realistic logos and write convincing content that closely matches genuine communication. These scams are not just more polished—they are carefully customised to trick you, making them far harder to detect.
AI Has Changed the Game… for Scammers
Scams today are not random or careless—they are targeted, calculated and alarmingly personal. With the help of AI, cybercriminals can sift through huge amounts of leaked data—often gathered from past data breaches—and tailor their attacks to each individual target.
Details like your full name, job title, city, online purchases, or even recent conversations may already be floating around on the dark web. Cybercriminals buy this information and then use AI tools to create messages that seem surprisingly genuine, making it much easier to trick you.
For instance, a fake email pretending to be from your bank might now include your full name, mention a recent transaction, or match the tone and style of actual messages you have received before—making it harder to detect as a scam.
In some cases, fraudsters are even using AI-powered voice-cloning tools to mimic the voices of family members or senior company executives during phone calls. This makes their requests sound authentic and urgent, tricking victims into revealing sensitive details or approving fraudulent transfers.
Last year, one of India’s most shocking tech-driven scams unfolded in Gurugram, when a senior executive received a call that appeared to come directly from his company’s US-based chief executive officer (CEO). The voice on the call sounded exactly like his boss—clear, confident and familiar. But it was a fake. Using AI-powered voice cloning technology, cybercriminals had replicated the CEO’s voice to perfection. Believing it was a genuine request, the executive transferred over Rs1 crore to a supposed ‘vendor’—only to later discover he had been duped.
AI can now generate thousands of scam messages within seconds, making it easier for cybercriminals to launch large-scale attacks through email, SMS, social media and messaging apps. Many scammers even create fake profiles using AI-generated photos and convincing backstories on platforms like LinkedIn or WhatsApp. These are often used to trick people with bogus job offers or lure them into investment scams that promise high returns but end in financial loss.
Using AI-generated videos and manipulated audio, fraudsters produce clips of celebrities or financial experts seemingly recommending a bogus investment app or scheme. In 2023, two deepfake videos of Infosys founder Narayana Murthy circulated on social media, purportedly promoting a so-called investment platform, 'Quantum AI', and claiming that users of this new tech could earn US$3,000 (around Rs2.5 lakh) on their first working day.
There has been a noticeable surge in AI-generated fake job offers and fraudulent interactions with supposed human resources (HR) professionals, especially on platforms like LinkedIn and WhatsApp. These scams are highly convincing—scammers use company logos, official-looking email formats and even copy real HR profiles from genuine firms to appear authentic. Job-seekers are lured with promises of high-paying roles and smooth recruitment processes. Once trust is established, the victims are asked to pay a ‘security deposit’ or ‘document processing fee’. Many end up losing thousands of rupees before realising the job was never real.
Cybercriminals are also using AI-generated photos and chat responses to create highly realistic fake profiles on platforms like Tinder, Bumble, Instagram and even matrimonial sites. These profiles are designed to appear genuine and trustworthy. Once they win the victim’s trust, the cybercriminals strike—either by emotionally manipulating them into sending money or by luring them into intimate video calls. These calls are secretly recorded and later used for sextortion, where victims are blackmailed with threats to leak the footage unless a ransom is paid.
Why AI-powered Scams Are So Dangerous
Scams have evolved and AI is the driving force behind this new wave of cybercrime. What once relied on poorly written messages and clumsy impersonation is now being replaced by precision-crafted deception.
Here is what makes AI-driven scams so effective—and dangerous:
• Perfectly written messages: AI tools eliminate the spelling errors and awkward grammar that once helped people spot a scam instantly.
• Massive scale: Fraudsters can now generate thousands of unique phishing messages in just minutes, flooding inboxes, direct messages (DMs) and chats with realistic content.
• Hyper-personalisation: With access to stolen personal data, scammers can tailor messages to match your name, job, location—even your recent online activity—making them far more believable.
• Real-time manipulation: AI chatbots can mimic human conversation and respond instantly to your replies, nudging you to take urgent action without suspicion.
• Fake voices and faces: Deepfake technology and voice cloning allow criminals to impersonate family members, colleagues, or even company executives, with uncanny accuracy.
These tools combine to create scams that can deceive even the most cautious, tech-savvy users. In today’s AI-driven threat landscape, traditional advice like 'don’t click suspicious links' is no longer enough. Staying safe now requires awareness, verification, and a healthy dose of scepticism—even when everything seems ‘perfect’.
According to Avast's Q1 2025 Threat Report, data breaches have skyrocketed, up more than 186% in just the first three months of 2025. "With sensitive information like emails, passwords, and credit card numbers flooding the dark web, scammers are using AI to launch hyper-targeted phishing attacks that are hard to detect with the naked eye," the report warns.
How Can a Common User Stay Safe from AI-driven Scams?
As cybercriminals become more sophisticated, it is important for all of us to adapt and stay alert. AI-based scams are fast, convincing, and increasingly personalised—but there are ways to protect yourself.
Here Are Some Simple but Effective Steps You Can Follow:
Don’t fall for urgency or fear tactics
Scammers often pressure you to act immediately—by saying your account will be blocked or you have won a prize. Take a moment to pause and think before clicking a link or sending money.
Verify through trusted sources
If you receive a suspicious call, message, or email—even from someone you know—don’t trust it blindly. Contact the person or company directly using a number or email you already trust, not the one given in the message.
Look out for subtle red flags
AI-generated scams often closely mimic real emails or websites, but small clues can give them away—like an extra letter in the domain name, or using '.co' instead of '.com'.
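For technically inclined readers, the same check can be sketched in a few lines of Python. This is only an illustration: the trusted-domain list, the 0.8 similarity cutoff and the helper name `check_domain` are all assumptions for the example, not a production defence.

```python
import difflib
from urllib.parse import urlparse

# Illustrative allow-list of genuine domains (an assumption for this example)
TRUSTED = ["paytm.com", "phonepe.com", "razorpay.com", "sbi.co.in"]

def check_domain(url: str) -> str:
    """Classify a URL's domain as trusted, a suspicious lookalike, or unknown."""
    host = (urlparse(url).hostname or "").removeprefix("www.")
    if host in TRUSTED:
        return "trusted"
    # Fuzzy-match against trusted domains to catch one-letter tricks
    close = difflib.get_close_matches(host, TRUSTED, n=1, cutoff=0.8)
    if close:
        return f"suspicious lookalike of {close[0]}"
    return "unknown"

print(check_domain("https://www.paytm.com/offers"))    # → trusted
print(check_domain("http://razorpayy.com/login"))      # → suspicious lookalike of razorpay.com
print(check_domain("https://secure-paytm.co/verify"))  # → unknown
```

Note that a plain string comparison would miss 'razorpayy.com'; the fuzzy match flags it precisely because it is almost, but not exactly, the real domain.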
Use multi-factor authentication (MFA)
Enable MFA on your email, banking and social media accounts. This adds a second layer of security, making it harder for attackers to gain access even if they have your password.
Keep an eye on your accounts
Regularly review your bank statements, app login history and credit reports. If something looks suspicious, report it immediately.
Avoid clicking on random links
If you get an unexpected message with a link—even if it looks legitimate—don’t click. Go to the official website or open the app directly instead.
Keep your device software updated
Update your phone, computer, apps and antivirus software regularly. These updates fix known security flaws that scammers often exploit.
Share less on social media
Details like your birthday, job title, or phone number can be used to personalise scams. Limit what you make public online.
Report any scam attempts
If you suspect a scam, report it at https://cybercrime.gov.in or call the national cybercrime helpline at 1930. Your report could help prevent others from being targeted.
AI, ML, and LLMs are powering a new generation of scams—smarter, faster and harder to detect. With technology that can mimic natural language, clone voices, replicate identities and exploit human psychology, cybercriminals are no longer just opportunists—they are highly equipped and dangerously efficient.
But the most powerful defence still lies with you: awareness. By understanding how these scams work and staying vigilant, you can protect yourself and others.
In a world where even trust can be artificially manufactured, remember this: if something feels too perfect, it probably isn’t. When in doubt, don’t trust—verify.
Stay Alert, Stay Safe!