The India AI Impact Summit 2026 brought together political leaders, technology chiefs and policy-makers at a moment many believe will shape the next chapter of human progress. The mood was forward-looking, even optimistic — but not without caution.
At the heart of the conversation was a powerful idea from N Chandrasekaran, chairman of Tata Sons. He described artificial intelligence (AI) as the next foundational infrastructure — ‘the infrastructure of intelligence’ — placing it in the same league as steam power, electricity and the internet. Yet, he was equally clear about the trade-off. In an age of abundant intelligence, he said, what become scarce are trust, stewardship and human capability.
That scarcity is already visible.
For consumers, small businesses and institutions, the rapid spread of generative AI is not just a technological shift; it is reshaping the fraud landscape in real time. While leaders such as Dario Amodei of Anthropic, Sundar Pichai of Google, António Guterres of the United Nations, and French president Emmanuel Macron spoke about inclusion, guardrails and shared prosperity, cybercriminals were listening too. And they are deploying the same tools — at scale.
A recent YouTube interview between Sam Harris and Professor Judea Pearl offers a useful reality check. Prof Pearl, widely regarded as the pioneer of causal reasoning in AI, pushed back against the idea that simply building bigger language models will produce genuine machine understanding. Today’s large language models (LLMs), he explained, do not independently figure out how the world works. They synthesise and compress interpretations that humans have already written and uploaded online.
Take medicine, he said. “These systems are not directly discovering cause-and-effect relationships from raw hospital data. Instead, they are trained on papers and explanations authored by doctors who already understand disease mechanisms.”
Without the ability to move up what Prof Pearl calls the ‘ladder of causation’ — from recognising patterns to understanding why something happens — AI remains fundamentally limited.
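A toy simulation makes the point tangible. In the sketch below (an illustrative example of the general idea, not drawn from the interview), a hidden confounder makes two variables strongly correlated even though one does not cause the other; only an intervention — the ‘do’ step on the second rung of the ladder — exposes the truth.

```python
# ladder_demo.py -- correlation vs causation, illustratively.
# Hypothetical setup: a confounder Z drives both X and Y, so X and Y
# correlate strongly even though X has no causal effect on Y.
import random

random.seed(42)

def observe(n=10_000):
    """Rung 1: passively observe the world (what training data records)."""
    data = []
    for _ in range(n):
        z = random.gauss(0, 1)          # hidden confounder
        x = z + random.gauss(0, 0.1)    # X is driven by Z
        y = z + random.gauss(0, 0.1)    # Y is also driven by Z, not by X
        data.append((x, y))
    return data

def intervene(x_fixed, n=10_000):
    """Rung 2: do(X = x_fixed) -- set X by fiat, cutting its link to Z.
    Y still depends only on Z, so the forced value of X never enters."""
    return [random.gauss(0, 1) + random.gauss(0, 0.1) for _ in range(n)]

def corr(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x, _ in pairs) / n
    vy = sum((y - my) ** 2 for _, y in pairs) / n
    return cov / (vx * vy) ** 0.5

obs = observe()
print(f"Observed correlation of X and Y: {corr(obs):.2f}")   # near 1.0
lo, hi = intervene(-2), intervene(+2)
print(f"Mean Y under do(X=-2): {sum(lo)/len(lo):+.2f}")      # near 0
print(f"Mean Y under do(X=+2): {sum(hi)/len(hi):+.2f}")      # near 0
```

Seen through observation alone, X looks like an excellent predictor of Y; intervene, and Y does not move. A system stuck on the first rung cannot tell the two apart.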
Why does this matter for consumers? Because fluency is not the same as understanding. An AI-generated answer may sound authoritative, even confident, but that does not mean it grasps the facts or the underlying cause and effect. In high-stakes areas such as health, finance or law, that distinction can make the difference between life and death.
Let us then examine how dubious AI systems are overwhelming digital ecosystems — and how fraudsters are exploiting that surge.
When Intelligence Becomes Industrial-scale Deception
In the past, fraud had a built-in constraint: human effort. Remember the ‘bumper inheritance lottery’ emails from the ‘Nigerian prince’? Writing convincing scam emails, drafting fake legal documents or preparing forged financial notices takes time and skill. Those early scam messages lacked both, and almost anyone could spot the mistakes in them. In other words, natural friction limited the scale of fraud.
Generative AI has removed that barrier.
One individual can now produce thousands of polished (read: professionally written) emails, legal filings, resumés, research submissions or social media posts in minutes. Systems that once relied on the effort of producing content as a natural filter are now flooded.
Newsrooms report waves of AI-written opinion pieces. Academic journals are grappling with machine-generated submissions. Lawmakers face automated public comments in bulk.
Recruiters are swamped with synthetically generated resumés. Social media platforms are awash with fabricated reviews and influencer content. Information is everywhere in abundance, but wisdom seems to be missing from much of it.
In an essay published in The Conversation, public-interest technologist and security expert Bruce Schneier and his co-author Nathan E Sanders describe the flood of generative AI content as an arms race. "These are all arms races: rapid, adversarial iteration to apply a common technology to opposing purposes. Many of these arms races have clearly deleterious effects... The fear is that, in the end, fraudulent behaviour enabled by AI will undermine systems and institutions that society relies on."
The problem is that when both attack and defence are automated, ordinary users can get caught in the crossfire.
The Fraud Multiplier Effect
AI does not just increase volume. It sharpens precision.
• Hyper-personalised Phishing:
Fraudsters analyse leaked databases, scrape social media profiles and generate messages that reference real employers, recent purchases or family members. The clumsy scam email of the past has evolved into a polished, context-aware communication that feels unsettlingly personal.
• Voice and Video Deepfakes:
With short audio samples, criminals can clone voices to impersonate executives or relatives. Synthetic videos are being used in investment scams and extortion attempts. Familiarity in voice and video can now be fabricated.
• Fabricated Legal and Financial Documents:
Court orders, tax notices, know-your-customer (KYC) alerts and regulatory warnings can be replicated convincingly — complete with logos, formatting and a bureaucratic tone.
• AI-powered Investment Scams:
Chatbots pose as financial advisers, responding instantly and confidently, guiding victims step by step into fraudulent platforms.
Why It Affects You
When digital systems are overwhelmed with synthetic inputs, several risks surface. Human oversight often weakens under pressure from volume. Legitimate users may be wrongly flagged. Authentic communication becomes harder to distinguish from fabrication.
Meanwhile, regulation often trails technological change. Remember old Bollywood films? The police would invariably arrive only after the drama had played out and the damage was already done!
As Mr Guterres has argued, AI governance cannot be left to a handful of actors. Without credible guardrails, power and data concentration deepen systemic risk. And as Mr Pichai from Google warned, if access and literacy gaps widen, the AI divide becomes a security divide — those who understand the threat landscape will be safer than those who do not.
High-risk Red Flags
In this environment, we as consumers need to recalibrate our basic instincts.
A perfectly written but urgent message is no longer reassuring. Hyper-personalised details must prompt verification, not trust. AI chat support inside unfamiliar apps deserves scrutiny. A voice that sounds like a relative asking for emergency funds should be verified independently. Official-looking documents must be cross-checked through legitimate channels.
Remember: fluency is easy to fake; authenticity is not.
Practical Safeguards
• Adopt a zero-trust posture. Verify unsolicited communication through official websites or independently sourced contact details.
• Tighten your digital footprint. Limit publicly visible personal information and review privacy settings regularly.
• Enable multi-factor authentication on email, banking, social media and cloud accounts. Use a reputable password manager to generate unique credentials.
• Pause before clicking. Inspect URLs carefully (a short illustrative check follows this list). Avoid downloading unexpected attachments.
• Be cautious with biometric exposure. Clear voice recordings and high-resolution videos can be repurposed for cloning.
• Download applications only from official marketplaces — and scrutinise developer credentials.
• Monitor financial transactions routinely. Early detection improves recovery chances.
• Finally, educate family members. Teenagers and elderly relatives are frequent targets of AI-enabled scams.
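For readers comfortable with a little code, here is a minimal sketch of the ‘inspect URLs carefully’ habit in practice. It is an illustration, not a security tool: the trusted-domain list and the lookalike character substitutions below are hypothetical examples chosen for this article.

```python
# url_check.py -- a toy illustration of "inspect URLs carefully".
# TRUSTED_DOMAINS and LOOKALIKES are hypothetical examples, not a
# real defence; genuine phishing detection is far more involved.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"sbi.co.in", "incometax.gov.in", "rbi.org.in"}

# Character swaps fraudsters commonly use to fake a familiar domain.
LOOKALIKES = {"0": "o", "rn": "m", "vv": "w"}

def normalise(host: str) -> str:
    """Undo common lookalike substitutions so fakes map onto real names."""
    host = host.lower()
    for fake, real in LOOKALIKES.items():
        host = host.replace(fake, real)
    return host

def check_url(url: str) -> str:
    host = urlparse(url).hostname or ""
    bare = host[4:] if host.startswith("www.") else host  # drop "www."
    if bare in TRUSTED_DOMAINS:
        return f"OK: {host} is on your trusted list"
    if normalise(bare) in TRUSTED_DOMAINS:
        return f"SUSPICIOUS: {host} imitates a trusted domain"
    return f"UNKNOWN: {host} -- verify through official channels"

if __name__ == "__main__":
    for u in ("https://www.rbi.org.in/notice",
              "https://rbi.0rg.in/notice",          # '0' in place of 'o'
              "https://kyc-update-portal.example"):
        print(check_url(u))
```

The point is not the code but the habit it encodes: a domain that is one character away from a name you trust deserves more suspicion, not less.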
Can Institutions Adapt?
There is reason for cautious optimism. AI can strengthen fraud detection by analysing anomalies and monitoring behavioural patterns. It can assist researchers in validating findings more efficiently — provided transparency and human oversight remain central.
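To make that concrete, here is a minimal sketch of one of the simplest techniques such detection builds on: flagging a transaction that deviates sharply from an account's usual pattern. The figures and the three-standard-deviation threshold are invented for illustration; real fraud engines combine many more behavioural signals with human review.

```python
# anomaly_flag.py -- a toy z-score check on transaction amounts.
# The sample history and threshold below are illustrative assumptions;
# production systems model devices, locations, timing and merchants too.
from statistics import mean, stdev

def flag_anomalies(history, new_amounts, threshold=3.0):
    """Flag any new amount more than `threshold` standard deviations
    away from the mean of the account's past transactions."""
    mu, sigma = mean(history), stdev(history)
    flagged = []
    for amount in new_amounts:
        z = abs(amount - mu) / sigma if sigma else 0.0
        flagged.append((amount, z > threshold))
    return flagged

if __name__ == "__main__":
    past = [1200.0, 950.0, 1100.0, 1300.0, 1050.0]  # typical spends (Rs)
    incoming = [1150.0, 98000.0]                    # one normal, one outlier
    for amount, suspicious in flag_anomalies(past, incoming):
        print(f"{amount:>10.2f}  {'REVIEW' if suspicious else 'ok'}")
```

The same logic, scaled up and enriched with behavioural context, is what lets a bank pause a payment and ask, ‘Was this really you?’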
As Mr Amodei noted, AI systems are rapidly approaching capabilities that rival or exceed human performance in many tasks. That potential could transform healthcare and education. But unmanaged disruption risks destabilising the very trust-based systems society depends on.
President Macron has warned that nations should not become passive data markets. At a personal level, that means understanding where your data resides and how it is used.
The Real Scarcity: Trust
The AI Summit’s underlying theme was responsibility. When generating convincing content costs almost nothing, content becomes infinite. Trust, by contrast, becomes rare.
Fraud flourishes where verification is difficult and volume is overwhelming. Addressing this reality requires robust regulation, transparent AI standards, institutional accountability and widespread digital literacy.
But for individuals, vigilance remains the first line of defence.
AI may, indeed, become the infrastructure of intelligence. It may unlock extraordinary benefits. Yet, the same tools that accelerate discovery can also automate deception.
In this environment, scepticism is not cynicism. It is discipline.
Stay Alert, Stay Safe!