Fraud Alert: Are You Ready for the Deepfake Tsunami in 2024?
Ratan Tata and NR Narayana Murthy are two well-respected business leaders. One thing common to them is that they do not offer investment advice. So, when videos surfaced where they were heard recommending a certain investment scheme or the use of a particular investing platform, it created ripples across the country. At first sight, the video clips looked credible, but it soon emerged that they were deepfake videos created to misuse the credibility and reputation of the two corporate honchos. 
Two new deepfake videos of the Infosys founder, shared on social media, purportedly promoted an investing platform called 'Quantum AI', which would supposedly allow users to earn US$3,000 (around Rs2.5 lakh) on the first working day.
One Sona Agrawal uploaded a fake video on Instagram portraying an interview with Mr Tata. The video endorsed certain investment opportunities, falsely claiming Mr Tata's involvement with the venture. There have also been some social media messages with false claims about Mr Tata's investment in cryptocurrency. This prompted him to clarify that he has no association with cryptocurrency ventures in any form.
Both Mr Murthy and Mr Tata went public, requesting people not to believe the deepfakes. What is worrying is that they are not the only victims of deepfake videos circulated on social media. 
Deepfakes pose a significant threat to everyone's privacy and individual rights and also enable the use of technology to defraud people. The cases of actors Rashmika Mandanna, Katrina Kaif and Kajol are a few examples of convincing fake videos that can be created to destroy reputations, defraud people or even invite legal action.
In Rashmika Mandanna's case, the deepfake video morphed her face into another woman wearing a revealing black outfit. In Katrina Kaif's case, an image of the actress from her recently released film 'Tiger 3' was altered. While the film had the actress engaging in a fight with a stuntwoman clad in a towel, the edited version showed her wearing a low-cut white top and a matching bottom instead of the towel.
In the case of Kajol, a video of social media influencer Rosie Breen was manipulated to replace her face with that of Kajol. The clip showed her changing clothes on camera. For a split second, the manipulated video showed the face of the original woman, which allowed the fake to be detected.
Deepfakes first emerged on the scene in 2019, when fake videos of Meta chief Mark Zuckerberg and former US House speaker Nancy Pelosi were circulated. They are the 21st century's alternative to Photoshopping—creating images and videos of celebrities via a form of artificial intelligence (AI) called deep learning.
Advances in AI, machine learning (ML) and large language models (LLMs) like OpenAI's ChatGPT allow fraudsters and cybercriminals to incorporate hyper-realistic digital falsification. Thus, the near-real image or audio clip deepfake can potentially damage reputations, exploit people, sabotage elections, spread large-scale misinformation, fabricate evidence and, in general, undermine trust.
According to the information security company CyberArk, deepfakes will pose a looming threat to India's cybersecurity in 2024. These attacks will target individuals, businesses and even government institutions, aiming to spread misinformation, manipulate public opinion, and disrupt critical infrastructure.
"As AI becomes more pervasive, adversaries will quickly capitalise on its capabilities, crafting new attack vectors that exploit vulnerabilities in novel ways," researchers from CyberArk say.
The financial repercussions of deepfake attacks could be severe, potentially leading to reputational damage, loss of investor confidence and even economic instability.
In short, deepfakes, synthetic media generated through AI, ML, and LLM techniques, present numerous challenges and potential risks. 
At present, there aren't adequate laws in India to control deepfakes. In 2018 and 2021, the Indian government issued guidelines for the use of AI, titled 'National Strategy for AI' and 'Responsible AI Guidelines', respectively. But these guidelines are not mandatory and, therefore, not enforceable as law.
Earlier this week, the Union government advised all social media intermediaries to ensure compliance with the existing IT rules and specifically target the growing concerns around misinformation powered by AI—deepfakes. As per the advisory, intermediaries are obliged to make reasonable efforts to prevent users from hosting, displaying, uploading, modifying, publishing, transmitting, storing, updating, or sharing any information related to the 11 listed user harms or content prohibited on digital platforms.
While law enforcement can help in investigating and apprehending cybercriminals, for the common person, it is essential to understand how generative AI, ML, and LLM technologies work.
Before learning how to handle deepfakes, let us first understand the critical challenges of the new technologies.
Challenges Posed by Deepfakes:
Misinformation and Manipulation: Deepfakes can create realistic videos or audio recordings that manipulate and spread false information.
Identity Theft: Cybercriminals can use deepfakes to impersonate individuals, potentially causing harm to personal and professional reputations.
Privacy Concerns: Deepfakes can be created using existing personal photos or videos, leading to privacy violations.
Social Engineering Attacks: Cyber attackers can use deepfakes to deceive individuals into revealing sensitive information or performing malicious actions.
Here are some suggestions to safeguard against deepfake frauds...
Stay Informed: Be aware of the existence and potential impact of deepfakes. Stay informed about the latest technologies and developments.
Verify Sources: Double-check the authenticity of information and media received from unfamiliar or untrusted sources. Verify with multiple reliable sources before believing or sharing content.
Use Critical Thinking: Question the authenticity of content that seems suspicious or too good to be true. Look for inconsistencies in the narrative or visual elements.
Check Metadata: Examine the metadata of media files for inconsistencies or signs of manipulation. Original files typically have metadata that aligns with the context of the content. To check the metadata of an image or file, open its properties (details) in the file explorer on a computer or in the Files app on an iPhone.
Use Robust Passwords/MFA: Strengthen your online security by using strong, unique passwords, enabling multi-factor authentication (MFA), and regularly updating apps or software and operating systems (OS).
Educate yourself: Learn about deepfake detection tools and techniques. There are emerging technologies and platforms designed to identify manipulated media. For example, the Detect Fakes website by MIT can help ordinary people understand or identify the difference between a video manipulated by AI and a normal, unaltered video. Detect Fakes is a research project designed to answer these questions and identify techniques to counteract AI-generated misinformation.
Be cautious of unusual requests: If you receive unexpected or unusual requests for information or actions, especially via electronic communication (primarily through mobiles), verify the authenticity through alternate means. Use another device or mobile to speak directly with the person in the video or audio clip to check authenticity. 
Don't share personal information: Minimise the personal information you share on social media and other online platforms. Limit the accessibility of personal data that could be used to create deepfakes.
Report suspicious activity: Report any suspected deepfakes or fraudulent activities to relevant authorities, like the cybercell of local police or government cyber security platforms (for example, the National Cyber Crime Reporting Portal). Many social media platforms also have mechanisms for reporting suspicious content.
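For readers comfortable with a little code, the metadata tip above can be automated. The sketch below is a minimal, illustrative example (not a definitive forensic tool): it scans a JPEG's raw bytes for an EXIF (APP1) segment. Re-encoded or manipulated images often have their original camera metadata stripped, so a missing EXIF block is one weak signal worth noting; its presence proves nothing by itself. The sample byte strings at the end are synthetic, not real photos.

```python
def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an APP1/EXIF segment."""
    if data[:2] != b"\xff\xd8":          # JPEG files start with the SOI marker
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # every segment starts with 0xFF
            break
        marker = data[i + 1]
        if marker == 0xE1:               # APP1 segment may hold EXIF data
            return data[i + 4:i + 10] == b"Exif\x00\x00"
        if marker in (0xD8, 0x01) or 0xD0 <= marker <= 0xD7:
            i += 2                       # markers with no length field
            continue
        length = int.from_bytes(data[i + 2:i + 4], "big")
        i += 2 + length                  # skip to the next segment
    return False

# Synthetic examples only: one stream with an EXIF header, one without
with_exif = b"\xff\xd8\xff\xe1\x00\x10Exif\x00\x00" + b"\x00" * 10
stripped = b"\xff\xd8\xff\xdb\x00\x04\x00\x00"
print(has_exif(with_exif))   # True  -> metadata present
print(has_exif(stripped))    # False -> metadata stripped
```

In practice, dedicated tools read far more fields (camera model, edit history, timestamps); this sketch only shows the underlying idea of inspecting a file's raw structure rather than trusting what is on screen.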
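Similarly, the multi-factor authentication (MFA) recommended above usually works via time-based one-time passwords (TOTP), the six-digit codes that authenticator apps generate. The sketch below, assuming the standard RFC 6238 algorithm with its SHA-1 variant and default 30-second step, shows why such codes are hard for a fraudster to guess: they are derived from a shared secret and the current time. The demo secret is the published RFC 6238 test value, not a real credential.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, timestamp=None, step=30, digits=6) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    now = time.time() if timestamp is None else timestamp
    counter = struct.pack(">Q", int(now // step))   # 8-byte big-endian counter
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Published RFC 6238 test secret: ASCII "12345678901234567890" in base32
demo_secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(demo_secret, timestamp=59))  # 287082 (matches the RFC 6238 test vector)
```

Because the code changes every 30 seconds and depends on a secret stored only on your device and with the service, a stolen password alone is not enough to log in, which is exactly why enabling MFA blunts many deepfake-driven social-engineering attacks.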
Do remember that the technology behind deepfakes is constantly evolving and new use cases may emerge over time. Deepfake frauds can have serious consequences, ranging from financial losses to reputational damage and political manipulation. 
By combining awareness, critical thinking and technological precautions, individuals can reduce their vulnerability to deepfake frauds and protect themselves from potential harm, including financial loss.
Wish you a very happy and safe New Year!
How To Report Cyber Fraud?
Do report cybercrimes to the National Cyber Crime Reporting Portal or call the toll-free National Helpline number, 1930. To follow on social media: Twitter (@Cyberdost), Facebook (CyberDostI4C), Instagram (cyberdostl4C), Telegram (cyberdosti4c). 
If the fraud is related to your bank account, you need to immediately send an email to the official email ID of your branch (you can find it on the bank's website or your passbook) with a copy to the bank's customer care. Even if you have called the official number for customer care, you must still send an email describing your conversation with the bank executive, along with the time, date, and duration of the call. This will be helpful if you face a liability issue with the bank.