Fraud Alert: DeepSeek AI Disruptions
Everything was going smoothly in the world of artificial intelligence (AI), with established players making incremental progress every day. Device-makers and service-providers were developing or integrating AI models, and a growing number of end-users (especially smartphone users) were beginning to feel AI's presence. Then came a sudden earthquake. DeepSeek, based in Hangzhou (China), announced the launch of its ultra-low-cost but highly advanced AI model, R1, causing havoc in global stock markets and a massive flutter in the tech world.
 
DeepSeek R1 is garnering worldwide attention, not just for its capabilities in mathematical reasoning, coding and natural language understanding, where it performs at a level comparable to leading models like OpenAI's o1, but for doing all of this at a dirt-cheap cost. Moreover, unlike several existing AI models, such as ChatGPT, DeepSeek R1 is an open-source model: developers worldwide can access it and build upon its architecture.
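To see what 'open-source' means in practice, here is a minimal sketch of how a developer might download and query one of DeepSeek's publicly released R1 checkpoints using the Hugging Face transformers library. The model ID below refers to a small distilled variant and is an assumption for illustration; any open checkpoint is loaded the same way, with no API key or per-call fee involved.

```python
# A minimal sketch: downloading and querying an open-source DeepSeek R1
# distilled checkpoint with the Hugging Face `transformers` library.
# The model ID is assumed for illustration; any open checkpoint
# published on the Hugging Face Hub works the same way.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # assumed public checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Ask a simple reasoning question; the model runs locally.
prompt = "What is 17 multiplied by 23? Think step by step."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```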
 
Big players like OpenAI, Google DeepMind and Anthropic rely on proprietary models to maintain their competitive edge.
 
DeepSeek R1's open-source approach could challenge this business model by allowing businesses and researchers to build robust AI applications without paying for access to closed models.
 
DeepSeek R1 still needs to clear several tests to establish its competitiveness against existing players.
 
If it performs on par with or better than proprietary models while being freely available, existing players may be forced to lower their application programming interface (API) pricing or open-source more of their technology to stay relevant. This could lead to a price war in AI services.
 
Having said that, DeepSeek R1's release is bound to inspire a wave of open-source AI advancements, similar to how Stability AI and Meta's Llama models spurred competition. This could accelerate innovation in AI applications, making it easier for start-ups and independent developers to build competitive products.
 
On a broader scale, China's AI sector is rapidly catching up with the West, and DeepSeek R1's success could spark more government regulation and trade restrictions on AI technology.
 
It could also lead to tighter controls on the export of advanced chips and AI models and create deeper AI research silos between China and the West. At the same time, greater AI investment in China will reduce its reliance on (and the dominance of) US technology.
 
Multinational corporations, like Microsoft, Amazon and IBM, which are integrating AI into their operations, may rethink their reliance on closed models if DeepSeek R1 provides comparable performance at a fraction of the cost. This could also shift enterprise AI adoption toward self-hosted, open-source solutions.
 
At the same time, what is more worrisome is the risk of fraudsters using these advanced AI models to create scams and spread them rapidly among common users.
 
Emerging, advanced, low-cost AI models and the increasing use of deepfakes are already making dangerous scams even more terrifying. 
 
AI, machine learning (ML) and programs such as large language models (LLMs) are quite capable of easily taking care of users' most repetitive tasks. However, it is the tools created with these technologies that we need to worry about.
 
Open-source AI models make it easier for anyone—including bad actors—to deploy powerful AI without oversight. This could increase risks related to misinformation and deepfake creation, AI-driven cyber threats and lack of accountability for AI misuse.
 
AI, ML and LLM models and tools will help fraudsters pull off more advanced phishing attacks that will be harder to detect.
 
Cybercriminals could use these tools to automate hacking attempts, including writing malicious code that bypasses traditional security measures; to automate social engineering that manipulates victims into revealing sensitive information; and to generate zero-day exploits faster than before.
 
With more advanced AI chatbots, fraudsters could engage victims in real time and sound more human. AI-powered fraudulent customer service agents might trick people into sharing sensitive data and get the job done with ease, often without even raising suspicion.
 
At the same time, just as fraudsters can benefit from AI, ML and LLM tools, common users can also utilise them for better cybersecurity. For example, companies and governments can use AI to detect phishing patterns before attacks spread. Remember, AI-powered email filters are in a much better position than humans to spot scams in real time.
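To illustrate the idea behind such filters, here is a toy sketch using the scikit-learn library: a classifier learns which words signal phishing from labelled examples. The four emails below are invented for illustration; real filters are trained on millions of messages, but the principle is the same.

```python
# A toy sketch of how an AI-powered filter can learn phishing patterns.
# The tiny hand-written dataset below is purely illustrative; real
# email filters are trained on millions of labelled messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "Your account is locked. Verify your password at this link immediately",
    "Congratulations! You won a lottery. Share your bank details to claim",
    "Team meeting moved to 3pm tomorrow, agenda attached",
    "Here is the quarterly report you asked for last week",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF turns each email into word-frequency features;
# logistic regression learns which words signal phishing.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(emails, labels)

test = "Urgent: verify your bank password now to avoid account suspension"
print(clf.predict([test]))        # words like 'verify' and 'password' point to phishing
print(clf.predict_proba([test]))  # confidence scores for each class
```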
 
AI can also help individuals and companies create more secure passwords and predict vulnerabilities. Individuals may even get access to free, open-source AI security tools to protect their devices. Not to forget, AI-powered antivirus and threat detection could also become more advanced.
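Whatever the tool on top, the strength of a generated password ultimately comes from cryptographic randomness. Here is a short sketch of the kind of generation such tools perform, using only Python's standard secrets module (no AI involved; the function name is illustrative).

```python
# A short sketch of strong password generation using Python's standard
# `secrets` module, which provides cryptographically secure randomness.
# Password-helper tools add convenience on top, but the underlying
# strength comes from this kind of entropy.
import secrets
import string

def generate_password(length: int = 16) -> str:
    # Draw each character independently from a large alphabet so the
    # result has no guessable structure.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different on every run
```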
 
The Indian government has already announced that DeepSeek will be hosted on Indian servers after security protocol checks so that users, coders, and developers can benefit from its open-source code. "Scientists, researchers, developers and coders are working on multiple foundational models in this regard and with the given pace, the Indian AI model is likely to be ready within six months. The AI model is beginning with the computation facility of roughly 10,000 graphics processing units (GPUs). Soon, the remaining 8,693 GPUs will be added. It will largely benefit researchers, students and developers in the beginning," says Ashwini Vaishnaw, Union minister for electronics and information technology (MeitY).
 
GPUs are used to train and deploy complex AI models by performing large numbers of calculations simultaneously.
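A minimal sketch makes this concrete: the matrix multiplication that underlies neural-network training runs as thousands of parallel calculations on a GPU. The example below assumes the PyTorch library and a CUDA-capable GPU are available.

```python
# Timing the same large matrix multiplication on CPU and GPU.
# Assumes PyTorch is installed and a CUDA-capable GPU is available.
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

start = time.time()
_ = a @ b  # matrix multiply on the CPU
print(f"CPU: {time.time() - start:.3f}s")

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()  # copy the matrices to GPU memory
    torch.cuda.synchronize()           # wait for the transfer to finish
    start = time.time()
    _ = a_gpu @ b_gpu                  # same multiply, in parallel on the GPU
    torch.cuda.synchronize()           # wait for the computation to complete
    print(f"GPU: {time.time() - start:.3f}s")  # typically far faster
```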
 
DeepSeek R1's impact depends on how well it performs in real-world applications and whether it can sustain its open-source model while competing with giants like OpenAI and Google. If successful, it could fundamentally reshape the AI industry, forcing a shift towards more open, affordable and globally competitive AI development.
 
Till that time, Stay Alert and Stay Safe!
 