
Generative AI is making chatbots (almost) expert conversationalists.
Their ability to chat in a humanlike, engaging manner means that humans are chatting back. But do people trust AI? What are people telling chatbots? And what data are chatbots gathering?
There are now AI-powered chatbots for almost everything, from chatting on Meta or Snapchat to chatbots for product or entertainment recommendations to chatbots for serious healthcare applications like therapy.
These chatbots are already being used extensively by consumers. ChatGPT has over 150 million unique users. Around 20% of Snapchat’s monthly user base, some 150 million consumers, used its AI bot in the two months after its February 2023 launch, making it one of the most popular social chatbots. Those users sent around 10 billion messages to the bot during that period, messages Snapchat said it would use to develop its advertising business.
What Are People Talking to Chatbots About?
Snapchat
Snapchat found people were looking for recommendations and learning opportunities and discovered:
- 6 million conversations asking for art and design inspiration
- 5 million conversations looking for recommendations for tourist destinations
- 8 million conversations about pizza
- 12 million conversations about skincare, makeup, nail care, fragrance, sunscreen, and other cosmetics
- 16 million conversations about clothing
- 25 million conversations about pets
- 65 million conversations about cars
- And 46 million conversations about soccer!
Meta
Snapchat and Meta are leading the push into social chatbots, but other platforms, like TikTok, have their own bots. Mark Zuckerberg introduced Meta’s chatbots in September 2023: “virtual friends,” mostly represented by familiar celebrity faces, available in Instagram and Facebook direct messages to answer questions. A Bloomberg author found them not all that great.
Dating
Yes, there are numerous sites that offer AI sexbots, and we’re not going to discuss what information is being shared with those bots!
Replika, an AI “companion,” had millions of users, but after a clampdown from regulators it disabled some of the bot’s functionality overnight, leaving some users on Reddit so distraught that moderators posted suicide-prevention information.
There are restrictions on AI dating bots. Apple’s App Store, for example, restricts them, but with loneliness a global issue, the expectation is that the number of people actually “dating” AI bots will grow.
Therapy
According to the BBC, a bot called Psychologist created on Character.ai has received over 78 million messages. The bot is described as “someone who helps with life difficulties,” and some Reddit users have posted glowing reviews, but the bot isn’t an accredited therapist.
Limbic Access, an AI service in the UK, has received UK medical device certification and is used by NHS trusts to classify and triage patients.
Chatting with Characters, Celebrities and Icons
There are many other user-created chatbots on Character.ai taking the form of popular “characters” and often interacting in the same way they would, offering fans a chance to “talk” to their icons.
In August 2023, Character.ai said users were spending an average of two hours per day with its chatbots. The company is rolling out group chats to paid users with up to five AI characters and five humans. According to Time, some site users have admitted to growing reliance on the site. Character.ai emphasizes that it displays “Remember: Everything Characters say is made up!” above every chat.
Revealing Secrets and Biases
Although there is limited data, except for perhaps in the depths of ChatGPT and its competitors, people are telling chatbots anything from their immediate frustrations to their hopes, desires, and political standpoints.
Human conversations naturally reveal a lot of personal information, so conversations with chatbots, intentionally or unwittingly, can actually reveal plenty of information to a bot.
What Data Is Being Collected by Chatbots?
Many chatbots collect user conversations and learn from them; some are set up to use this information for advertising and other purposes. Other bots might not collect or infer any user information at all. To know, one must understand the workings and terms of use of each bot, model, site, or developer.
Snapchat’s My AI, for example, deletes conversations after 30 days but does use them to train the AI model. The bot has denied having access to user locations, yet per reports it can quickly provide local recommendations when asked. Its behavior can depend on a user’s settings and on Snapchat’s terms and guidelines.
Wired, in October 2023, warned chatbots can guess or infer users’ personal information from “innocuous chats,” and the ability could be used by scammers or to target ads.
Bots could easily infer personal information such as race, location, and occupation. Martin Vechev, a computer science professor at ETH Zurich in Switzerland, calls the issue “problematic” because it could herald a new era of advertising in which chatbots build detailed profiles of users. The Zurich researchers tested models from OpenAI, Google, Meta, and Anthropic. The article concludes by noting that large language models (LLMs) have also been known to occasionally leak specific personal information.
In a recent Guardian article, AI expert Mike Wooldridge warned that telling ChatGPT about work gripes or political preferences could “come back to bite” users. The Oxford professor says sharing private information or “having heart-to-hearts,” per the newspaper, is “extremely unwise” because the conversations are used to help train future versions. He also warns that the technology “tells you what you want to hear,” and that AI has no empathy or sympathy. OpenAI says conversations started when chat history is disabled aren’t used to train or improve models.
CustomGPT.ai supports GDPR compliance by providing straightforward information about data collection and use, protecting user data, and providing rights to data access and deletion. Discover how CustomGPT.ai uses your data when you build a custom ChatGPT-type bot.
Frequently Asked Questions
Why do people share highly personal details with chatbots so quickly?
A major driver is conversational design: modern AI chatbots respond in a human-like, engaging way, so people naturally talk back in detail. Usage scale also suggests strong comfort with chat interactions (for example, very large user and message volumes). The provided evidence shows broad willingness to engage, even across personal use cases, though it does not isolate a single psychological cause.
What kinds of information are users sharing with chatbots right now?
The provided data highlights practical and lifestyle topics at scale, including art/design inspiration and travel recommendations. It also shows high-volume everyday chatting. The source excerpt does not provide a validated breakdown of which sensitive personal details appear first, so teams should not assume low-risk topics are the only content users share.
Are chatbot conversations private by default, or can they be reviewed and reused?
They are not automatically private by default. One explicit example in the source states that chatbot messages were used to develop an advertising business. That means chat data may be reused beyond immediate responses, so you should check data-use terms before sharing sensitive details.
How can you reduce harmful oversharing when you deploy a chatbot?
A practical first step is transparency before users begin chatting: clearly state what data is collected and how it may be used. This is important because real-world chatbot deployments can repurpose messages for broader business uses. You can also keep prompts narrowly task-focused so users are less likely to volunteer unnecessary personal details.
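One way to operationalize that advice is to screen messages for obvious personal identifiers before they are stored or reused. The sketch below is a minimal illustration in Python, not a production filter: the regex patterns and placeholder labels are invented for this example, catch only the most obvious formats (emails and North American phone numbers), and would need substantial hardening for any real deployment.

```python
import re

# Hypothetical, simplified patterns for illustration only; real PII
# detection needs far broader coverage (names, addresses, IDs, etc.).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(
        r"\b(?:\+?\d{1,3}[\s-]?)?(?:\(\d{3}\)|\d{3})[\s-]?\d{3}[\s-]?\d{4}\b"
    ),
}

def redact(message: str) -> str:
    """Replace obvious PII with labeled placeholders before the
    message is logged, stored, or forwarded for reuse."""
    for label, pattern in PII_PATTERNS.items():
        message = pattern.sub(f"[{label} removed]", message)
    return message

print(redact("Email me at jane.doe@example.com or call 555-123-4567."))
```

Running a filter like this at the point of ingestion, before messages reach analytics or training pipelines, pairs naturally with the transparency step: you disclose what you collect and mechanically limit what you keep.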
Does heavy chatbot usage mean people fully trust AI with personal information?
Not necessarily. Large usage numbers show people are willing to use chatbots, but willingness is not the same as informed trust. The same source that reports very high engagement also shows messages can be reused for business purposes, which is a privacy tradeoff many users may not expect.
Is it safer to discuss personal issues with a general AI chatbot or a domain-specific chatbot?
The provided evidence does not give a direct safety ranking between general and domain-specific bots. It does show chatbot use ranges from casual recommendations to serious healthcare-related contexts. For personal issues, prioritize tools with clear data-use disclosures and share only the minimum necessary personal information.