
300 Million Private AI Chat Messages Leaked by a Single Misconfigured Database

Chat & Ask AI, a popular chatbot wrapper app with 50 million users, left its Firebase database wide open - exposing 300 million messages including suicide discussions, drug recipes, and medical conversations to anyone who knew where to look.

Every conversation you had with the AI - every question about your health, your finances, your darkest thoughts - was sitting in an open database that anyone could read. No hack required. No password needed. Just a misconfigured Firebase instance and 300 million messages there for the taking.

Chat & Ask AI, a chatbot wrapper app with over 50 million users, left its entire backend database exposed to the public internet. A security researcher found it. So could have anyone else.

What Happened

An independent security researcher known as Harry, operating through an organization called CovertLabs, built an automated scanning tool called Firehound that systematically tests iOS apps for Firebase misconfigurations. When he pointed it at Chat & Ask AI - listed on app stores as "Ask AI - Chat with AI Chatbot" - he found the database wide open.

The Firebase Security Rules had been left in a public state. That meant anyone with the project URL could read, modify, or delete the entire database without any authentication. No exploit. No zero-day. Just security rules set to public and never locked back down.
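
To see how low the bar was, here is a minimal sketch, in Python, of the kind of unauthenticated probe a scanner like Firehound automates. It assumes a standard Firebase Realtime Database REST endpoint; the project URL below is a placeholder, not the app's real address.

    import requests

    # Placeholder URL - every Realtime Database instance exposes a REST
    # endpoint at <database-url>/.json. The "shallow" parameter asks for
    # top-level keys only, which is enough to confirm exposure.
    DB_URL = "https://example-project-default-rtdb.firebaseio.com"

    resp = requests.get(f"{DB_URL}/.json", params={"shallow": "true"}, timeout=10)
    if resp.status_code == 200:
        print("World-readable. Top-level keys:", resp.json())
    elif resp.status_code in (401, 403):
        print("Locked down: rules deny unauthenticated reads.")

A 200 response to a request like this - no token, no session, no credentials of any kind - is the entire vulnerability.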

What he found inside:

  • ~300 million private messages from over 25 million users
  • Complete chat histories with timestamps
  • Which AI model each user selected (ChatGPT, Claude, or Gemini)
  • Custom chatbot names users had created
  • Model configuration settings
  • User files stored in the backend

Harry personally verified the scope by analyzing a sample of 60,000 users and over 1 million messages.

What People Were Saying to the AI

This is where it gets ugly. The leaked conversations contained exactly the kind of material people share with AI chatbots because they think nobody is watching:

  • How to painlessly commit suicide
  • How to write suicide notes
  • How to manufacture methamphetamine
  • How to hack other apps
  • Medical questions, mental health discussions, financial details

People treat AI chatbots as confessionals. This leak exposed those confessions to anyone with basic technical knowledge.

Who Built This App

Chat & Ask AI is developed by Codeway, an Istanbul-based mobile app company founded in 2020 by Anil Simsek and Tolunay Tosun. Codeway is not a small operation - the company has released over 60 apps with more than 300 million total downloads. Their other products include Wonder AI Art Generator (the most downloaded AI art app in 2023) and FaceDance.

Chat & Ask AI itself is a wrapper app - it does not run its own AI models. Users pick between ChatGPT, Claude, or Gemini, and the app routes their messages to those providers. The app has over 36 million downloads on Google Play alone, with a 4.45-star rating from 1.1 million reviews.
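
"Wrapper" is worth unpacking: the app's core logic amounts to a lookup table that forwards each message to whichever provider the user selected. The sketch below is purely illustrative of that pattern - the endpoints are the providers' public APIs, and nothing here is Codeway's actual code.

    # Illustrative dispatch layer; the endpoints are assumptions for this
    # sketch, not anything recovered from the app itself.
    PROVIDERS = {
        "chatgpt": "https://api.openai.com/v1/chat/completions",
        "claude": "https://api.anthropic.com/v1/messages",
        "gemini": "https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent",
    }

    def route_message(model_choice: str, message: str) -> dict:
        """Forward a user's message to the upstream model they selected."""
        # The wrapper attaches its own API keys, relays the reply - and, as
        # this breach shows, may keep a server-side copy of the conversation.
        return {"endpoint": PROVIDERS[model_choice], "payload": {"input": message}}

The value such an app adds is the interface; the liability it adds is the copy of every conversation it stores along the way.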

The app store listing operates under Deep Flow Software Services - FZCO, registered in Dubai Digital Park.

The Fix (and the Silence)

To Codeway's credit, they fixed the misconfiguration across all of their apps within hours of Harry's responsible disclosure around January 20. The apps were subsequently removed from Firehound's vulnerability registry.

But Codeway has issued no public statement about the breach. They did not respond to Fox News' request for comment. No blog post. No notification to affected users. Three hundred million messages were exposed, and the people who sent them were never told.

It is also unknown how long the database was open before Harry found it. The misconfiguration could have been there since the app launched. Anyone who knew what to look for - and Firebase misconfigurations are one of the most well-known vulnerability classes in mobile development - could have been quietly reading those messages for months or years.

This Is Not an Isolated Case

Harry's Firehound scanning project found some form of data exposure in 196 of the 198 iOS apps it examined, and 103 of 200 apps had specifically exploitable Firebase misconfigurations, collectively exposing tens of millions of stored files. Chat & Ask AI was the worst offender, but it was far from the only one.

Firebase databases are secure by default. The exposure happens when developers explicitly set rules to public during development and never switch them back. It is one of the most preventable vulnerabilities in cloud development, and it keeps happening because apps are shipped faster than anyone bothers to check the security configuration.
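
The fix is equally mundane. As a minimal sketch - assuming the Firebase CLI convention of keeping Realtime Database rules in a database.rules.json file - a few lines in a CI step would catch the "test mode left on" mistake before it ships:

    import json

    def rules_are_public(rules_path: str) -> bool:
        """Flag blanket-public Realtime Database rules before deploying.

        Only catches the boolean `true` case (test mode left on); string
        rules such as "auth != null" would need a real rules parser.
        """
        with open(rules_path) as f:
            rules = json.load(f).get("rules", {})
        return rules.get(".read") is True or rules.get(".write") is True

    if __name__ == "__main__":
        if rules_are_public("database.rules.json"):
            raise SystemExit("Refusing to deploy: database is world-readable.")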

The pattern extends beyond wrapper apps. Grok conversations showed up in Google search results. Microsoft Copilot had a bug letting the AI summarize confidential emails without permission. The assumption that AI conversations are private is, increasingly, a dangerous fiction.

The Uncomfortable Truth

Every AI chatbot you use stores your messages somewhere. The question is not whether those messages are recorded - they almost certainly are - but whether the company storing them has bothered to lock the door.

Chat & Ask AI had 50 million users and a 4.45-star rating. It looked legitimate. It routed messages to real AI models from real companies. And its entire database was sitting open on the internet, readable by anyone, containing every question 25 million people were too embarrassed to ask a human.

Firebase Security Rules have a default deny policy. Someone at Codeway explicitly changed that. And 300 million messages paid the price.

About the author

Elena, Senior AI Editor & Investigative Journalist, is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.