Last month, a 70-year-old senior journalist from Bengaluru received a call on WhatsApp informing her about a parcel sent from Mumbai to Taiwan in her name that contained 240 grams of MDMA (a drug), credit cards and passports. When she denied any knowledge of it, the caller said that her Aadhaar was being used to send the parcel and that a case had been registered against her. "She was under 'digital arrest' by fake cops from 15 December to 23 December 2023 and lost Rs1.2 crore to online fraudsters. The money had been transferred to multiple bank accounts in several north Indian states, including Bihar, the neighbouring state of Kerala, and Dubai," says a report from the Times of India.
Courier or FedEx scams are just one of the many frauds through which thousands of gullible people lose money daily. And, in most cyber-frauds, the chances of recovering the looted money are minuscule. Unfortunately, there is no respite from the rise in online fraud. In fact, in 2024, artificial intelligence (AI)-assisted threats will become more sophisticated and, thus, harder to spot.
Technology is a double-edged sword; at the same time, it is also a great leveller. It does not differentiate between end-users, whether you are highly educated or illiterate, among the world's wealthiest people or struggling to make ends meet. In cyberspace, especially in incidents of crime, what separates victims from non-victims is education, alertness and knowledge of how to deal with calls and messages from unknown callers or fraudsters.
With rapid advances in technology, we are witnessing a proliferation of tools and applications (software) that are becoming much smarter through the use of AI, machine learning (ML) and large language models (LLMs) such as OpenAI's ChatGPT and Google's Bard. These new-age tools allow fraudsters and cybercriminals to easily produce hyper-realistic digital falsifications.
According to Gen, a global company with a family of trusted consumer brands in cyber safety, as AI becomes more embedded in our daily routines, its impact extends beyond mere technological innovation, influencing societal norms, privacy considerations and ethical boundaries.
"2024 marks a pivotal shift towards more compact LLMs that function directly on user devices. Additionally, 2024 will be significant for generative AI, particularly in multi-type conversions. The evolving LLMs are not limited to text generation; they are branching into more dynamic forms of media conversion. However, the same technology will also be misused for the creation and spread of scams and disinformation, as it will be progressively harder to distinguish a genuinely recorded video from an AI-generated one. Evolving AI technologies raise questions about ethics, regulation and balancing innovation with user welfare," Gen says.
In 2024, Gen sees social media becoming a major attack vector for AI-related scams and disinformation. AI will also be used to compromise business emails, it says. "Digital blackmail will evolve and become more targeted. Ransomware will become more complex and damaging. We also expect evolving ransomware delivery methods, including more sophisticated VPN infrastructure exploitation."
According to ConsumerAffairs, many tech experts expect financial 'mobile abuse' to grow exponentially in 2024. "Right now, financial service smishing – a type of phishing attack that uses text messages (SMS) to deceive individuals into revealing personal information or clicking on malicious links – is in third place behind business/brand impersonators and delivery service message scams, but that could change."
"Smishers send messages designed to pique the recipient's interest or concern. Another new twist is that scammers are pretending that they are actually trying to protect the target from a scam," it says.
Fraudsters will continue to use several other scam methods in 2024, including instant loan apps, fake chat apps, phishing, fraudulent cryptocurrency exchanges and deepfakes.
There is growing concern among lenders about the rise in malicious practices involving fake instant loan apps. More of these apps are stealing data from delinquent borrowers, and rogue apps misuse user data to pressure repayment and add exorbitant late fees. Last month, the Indian government informed Parliament that, between April 2021 and July 2022, Google suspended or removed over 2,500 fraudulent loan apps from its Play Store. However, while Google removed the fake instant loan apps that were already there, hundreds of new apps are added to the Play Store every day, making it difficult for ordinary users to identify and stay away from such frauds.
Gen sees a rise in fake chat apps with hidden crypto-stealing features or spyware. "These fraudulent apps may abuse the trust users have in communication platforms, penetrating devices to extract confidential information or cryptocurrency keys. These attacks could include advanced social engineering methods, persuading users to give extensive permissions under the pretence of adding chat features that are not present in standard chat clients."
Last week, cricketer Sachin Tendulkar became the latest celebrity to fall victim to a deepfake video. The video shows Mr Tendulkar promoting an app called 'Skyward Aviator Quest' and claiming that his daughter, Sara Tendulkar, is making good money by playing on this application. Refuting the deepfake video, he wrote on X, "These videos are fake. It is disturbing to see rampant misuse of technology. Request everyone to report videos, ads & apps like these in large numbers. Social Media platforms need to be alert and responsive to complaints. Swift action from their end is crucial to stopping the spread of misinformation and deepfakes."
Avast expects 'malware as a service' (MaaS) to become increasingly sophisticated in 2024. "Platforms like Lumma already offer malware subscriptions with advanced features, allowing even newbie cybercriminals to launch complex attacks. This lowers the barrier to entry for cybercrime, letting more attackers participate in malicious activities. The profits from these activities are then reinvested into making the malware better, creating a self-perpetuating cycle of stronger and stronger cyber attacks."
While the misuse of new-age tools like AI by cybercriminals poses a significant challenge, we, as ordinary users, can leverage AI tools for our own protection. However, only by staying informed about new threats and remaining cautious and proactive when dealing with online fraud can we protect ourselves in 2024 and beyond.
Stay Alert, Stay Educated and Stay Safe!
How To Report Cyber Fraud?
Do report cybercrimes to the National Cyber Crime Reporting Portal (http://cybercrime.gov.in) or call the toll-free national helpline number, 1930. Follow on social media: Twitter (@Cyberdost), Facebook (CyberDostI4C), Instagram (cyberdosti4c), Telegram (cyberdosti4c).
If the fraud is related to your bank account, immediately send an email to the official email ID of your branch (you can find it on the bank's website or in your passbook), with a copy to the bank's customer care. Even if you have called the official customer-care number, you must still send an email describing your conversation with the bank executive, along with the time, date and duration of the call. This will be helpful if you face a liability dispute with the bank.