Every business, big or small, wants to ride the artificial intelligence (AI) wave by deploying chatbots to resolve consumer issues. Yet interactions with AI chatbots often leave consumers frustrated by bland, repetitive, boilerplate responses. Then again, human customer care executives often behave no differently.
The fact is that businesses are commissioning AI chatbots for anything and everything. In February, for example, we witnessed an avalanche of romantic AI chatbot apps, some without even a proper website. For decades, the dream of robots that fulfil our emotional needs has been a staple of popular fiction; AI seems to be fast-tracking that dream!
An AI chatbot is an artificial intelligence program designed to simulate interactions and relationships with its users. These chatbots typically employ natural language processing and machine learning techniques to hold conversations that mimic human interaction. Romantic AI chatbots go further, mimicking flirtation, compliments and emotional support.
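The core mechanics can be surprisingly simple. Here is a minimal, illustrative Python sketch of how such an app might wrap a general-purpose LLM behind a 'companion' persona prompt. The model name and persona text are assumptions for illustration, not taken from any app reviewed by Mozilla:

```python
# Illustrative sketch: a 'romantic companion' as a thin wrapper around a
# general-purpose LLM API. Persona prompt and model name are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are an affectionate companion. Respond warmly, "
    "pay compliments and offer emotional support."
)

# The running conversation history, seeded with the persona.
history = [{"role": "system", "content": PERSONA}]

def chat(user_message: str) -> str:
    """Append the user's message, call the LLM and return its reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("I had a rough day at work."))
```

Note that the growing `history` list is precisely the intimate transcript these apps accumulate; everything a user confides sits on the operator's servers.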
While new developments in AI, machine learning (ML) and large language models (LLMs) break new ground daily, experts are also uncovering the dark side of AI chatbots.
A report from Mozilla found that the recent explosion of romantic AI chatbots has created a whole new world of privacy concerns. The findings are startling: 10 of the 11 chatbots tested failed to meet even Mozilla's minimum security standards, such as requiring strong passwords or managing security vulnerabilities.
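To see what a 'strong password' baseline means in practice, here is a minimal Python sketch of the kind of check an app could run at sign-up. The specific rules are illustrative assumptions, not Mozilla's published test criteria:

```python
import re

def is_strong_password(password: str) -> bool:
    """Minimal sign-up check: length plus a mix of character classes.
    The exact thresholds here are illustrative assumptions."""
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None  # lowercase letter
        and re.search(r"[A-Z]", password) is not None  # uppercase letter
        and re.search(r"\d", password) is not None     # digit
        and re.search(r"[^\w\s]", password) is not None  # punctuation
    )

# A trivial repeated-digit password is rejected outright.
print(is_strong_password("11111"))             # False
print(is_strong_password("C0mpl3x!Passw0rd"))  # True
```

A check like this costs a few lines of code; apps that fail to enforce anything of the sort leave users' intimate conversations guarded by trivially guessable passwords.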
With estimated downloads of more than 100mn (million) on the Google Play Store, and an influx of romantic chatbots into OpenAI's recently opened app store, the problem will only grow.
Mozilla's researchers say, "In their haste to cash in, it seems like these rootin'-tootin' app companies forgot to address their users' privacy or publish even a smidgen of information about how these AI-powered LLMs—marketed as soulmates for sale—work. We are dealing with a whole 'nother level of creepiness and potential privacy problems. With AI in the mix, we may even need more 'dings' to address them all."
All 11 romantic AI chatbots reviewed by the researchers earned their 'privacy not included' warning label—putting them on par with the worst categories of products they have ever reviewed for privacy.
"To be perfectly blunt, AI girlfriends are not your friends. Although they are marketed as something that will enhance your mental health and well-being, they specialise in delivering dependency, loneliness, and toxicity, all while prying as much data as possible from you," says Misha Rykov, a researcher at Mozilla.
According to Mozilla, romantic AI chatbots are bad at privacy in disturbing new ways. "They can collect a lot of (really) personal information about you… But, that is exactly what they are designed to do! Usually, we draw the line at more data than is needed to perform the service, but how can we measure how much personal data is 'too much' when taking your intimate and personal data is the service?"
While romantic AI chatbots can provide companionship and entertainment for some users, they carry potential risks and negative consequences: emotional dependency, unrealistic expectations, privacy and security concerns, ethical problems and social isolation, among others. Users should be mindful of these risks and limitations, maintain a healthy balance between virtual and real-world relationships, and approach AI interactions with caution and critical thinking.
Remember, of all the romantic AI chatbots reviewed by Mozilla, none got its stamp of approval and all come with a warning: *privacy not included.
Mozilla says it found little to no information about how AI chatbots work. "How does the chatbot work? Where does its personality come from? Are there protections in place to prevent potentially harmful or hurtful content, and do these protections work? What data are these AI models trained on? Can users opt out of having their conversations or other personal data used for that training?"
"We have so many questions about how the artificial intelligence behind these chatbots works. But we found very few answers," researchers at Mozilla say. "That is a problem because bad things can happen when AI chatbots behave badly. Even though digital pals are pretty new, there is already a lot of proof that they can have a harmful impact on humans' feelings and behaviour. One of
Chai's chatbots reportedly encouraged a
man to end his own life. And he did. A
Replika AI chatbot encouraged a man to try to
assassinate the Queen. He did."
If, after reading all this, you still want to try romantic AI chatbots, here are a few suggestions to safeguard your privacy and data.
The most crucial: do not say anything to a romantic AI chatbot that you would not want your family, friends and colleagues to read. Be cautious about sharing personal information and set boundaries to avoid becoming emotionally invested in its responses.
Mozilla found that people are sharing their most intimate thoughts, feelings, photos and videos with their 'AI soulmate' on an app that not only records that information but potentially offers it up for sale to data brokers.
While it can be tempting to engage in prolonged conversations with AI chatbots, limit the time you spend interacting with them.
If you choose to interact with AI chatbots, use reputable platforms and services that prioritise user privacy and security (at present, there are none).
Stay informed about the latest developments in AI technology and the potential risks associated with these AI chatbots.
Stay Alert, Stay Safe!