Generative AI is making chatbots (almost) expert conversationalists.
Their ability to chat in a human and engaging manner means that humans are chatting back. But do people trust AI? What are people telling chatbots? And what data are chatbots gathering?
There are now AI-powered chatbots for almost everything, from chatting on Meta or Snapchat to chatbots for product or entertainment recommendations to chatbots for serious healthcare applications like therapy.
These chatbots are already being used extensively by consumers. ChatGPT has over 150 million unique users. Around 20% of Snapchat’s monthly user base, some 150 million consumers, used its AI bot in the two months after its launch in February 2023, making it one of the most popular social chatbots. Those users sent around 10 billion messages to the bot during that period, and Snapchat has said it would use those messages to develop its advertising business.
Snapchat found that people were mainly using the bot to look for recommendations and learning opportunities.
Snapchat and Meta are leading the push into social chatbots, but other platforms, such as TikTok, have their own bots. Mark Zuckerberg introduced Meta’s chatbots in September 2023: “virtual friends,” mostly represented by familiar celebrity faces, available in Instagram and Facebook direct messages to answer questions. A Bloomberg reviewer found them underwhelming.
Yes, there are numerous sites that offer AI sexbots, and we’re not going to discuss what information is being shared with those bots!
Replika, an AI “companion” with millions of users, disabled some of the bot’s functionality overnight after a clampdown from regulators, leaving some users on Reddit so distraught that moderators posted suicide-prevention resources.
There are restrictions on AI dating bots. Apple’s App Store, for example, limits them, but with loneliness a global issue, the number of people “dating” AI bots is expected to grow.
According to the BBC, a bot called Psychologist created on Character.ai has received over 78 million messages. The bot is described as “someone who helps with life difficulties,” and some Reddit users have posted glowing reviews, but the bot isn’t an accredited therapist.
Limbic Access, an AI service in the UK, has received medical device certification there and is used by NHS trusts to classify and triage patients.
Many other user-created chatbots on Character.ai take the form of popular “characters” and often interact in character, offering fans a chance to “talk” to their icons.
In August 2023, Character.ai said users were spending an average of two hours per day with its chatbots. The company is rolling out group chats to paid users with up to five AI characters and five humans. According to Time, some site users have admitted to growing reliance on the site. Character.ai emphasizes that it displays “Remember: Everything Characters say is made up!” above every chat.
Although public data is limited, outside of what sits in the logs of ChatGPT and its competitors, people are telling chatbots everything from their immediate frustrations to their hopes, desires, and political views.
Human conversations naturally reveal a lot of personal information, so conversations with chatbots, intentionally or unwittingly, can disclose plenty to a bot.
Snapchat’s My AI, for example, deletes conversations after 30 days but does use them to train the AI model. The bot has denied having access to user locations, yet it can quickly provide local recommendations when asked, per reports. Its behavior can depend on a user’s settings and on Snapchat’s terms and guidelines.
Wired warned in October 2023 that chatbots can guess or infer users’ personal information from “innocuous chats,” an ability that could be exploited by scammers or used to target ads.
Bots could easily infer personal information such as race, location, and occupation. Martin Vechev, a computer science professor at ETH Zurich in Switzerland, calls the issue “problematic” because it could herald a new era of advertising in which chatbots build detailed profiles of users. The Zurich researchers tested models from OpenAI, Google, Meta, and Anthropic. The article also notes that large language models (LLMs) have been known to leak specific personal information.
In a recent Guardian article, AI expert Mike Wooldridge warned that telling ChatGPT about work gripes or political preferences “could come back to bite users.” The Oxford professor says sharing private information or “having heart-to-hearts,” per the newspaper, is “extremely unwise” because the conversations are used to help train future versions. He also warns that the technology “tells you what you want to hear” and that AI has no empathy or sympathy. OpenAI says conversations started while chat history is disabled aren’t used to train or improve its models.
CustomGPT supports GDPR compliance by providing straightforward information about data collection and use, protecting user data, and giving users rights to access and delete their data. Discover how CustomGPT uses your data when you build a custom ChatGPT-style bot.