Lovable Users Report Leak of Chats, Code, Credentials

A fresh warning from developer Morgan Linton says free Lovable accounts can still read other users' AI chat histories, source code, and database credentials on projects created before November 2025. The pattern is the same one that earned the platform CVE-2025-48757 last year.

A Lovable user signs up for a free account. They open the dashboard. They start typing the kind of query any new customer might try - a list of recent projects, a sample prompt, a handful of schema inspections. The data that comes back is not theirs.

That's the claim circulating since early Monday morning, when entrepreneur and developer Morgan Linton posted a public warning to Lovable's customers. He says a free account was enough to pull back AI chat histories, source code, database credentials, and customer records belonging to other people's apps - specifically the apps built on the platform before the November 2025 schema upgrade. Lovable has not publicly confirmed or denied the post.

If the warning holds up, this is the third time in twelve months that Lovable has been tied to the same structural defect: Supabase tables shipped without Row Level Security, an AI coding pipeline that would rather get code working than make it safe, and a blast radius that extends to every customer of every app a Lovable user ever deployed.

TL;DR

  • Morgan Linton reports that a free Lovable account can read AI chats, source, credentials, and user data from other projects created before Nov 2025
  • Lovable hasn't publicly confirmed the report; several independent developers on X and Hacker News say they reproduced the access within minutes
  • The fingerprint matches CVE-2025-48757, the missing-RLS flaw disclosed by researcher Matan Getz in May 2025
  • Lovable patched new code generation in 2025 but projects created before the fix inherited the insecure defaults
  • Wiz independently found 20% of vibe-coded apps carry a critical exposure

What users are describing

The exposure follows a specific, reproducible script. Create a Lovable account. Open any project's Supabase connection. Issue the same queries the Lovable agent would issue for a user. Because Row Level Security was never set on the tables the agent generated, Supabase returns every row it can see. Not just yours.
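As a concrete sketch, the read looks like the query below, issued through any client holding the project's public anon key. The table name mirrors the example schema shown later in this piece; real projects will differ.

```sql
-- Issued via any Supabase client holding the public anon key.
-- With Row Level Security disabled on the table, Postgres applies
-- no per-user filter and returns every row the role can read.
select id, user_id, content, created_at
from public.user_messages
order by created_at desc;
```

With RLS enabled and an owner policy in place, the same query would return only rows where auth.uid() matches user_id.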

Linton's thread includes screenshots of another developer's messages table and an api_keys row that carried what looked like an OpenAI secret. Replies from other users claim the same thing happened to them. None of those replies constitute independent forensic proof, but the pattern matches the disclosure template we already have on file for this platform.

The SQL shape of the bug has been public since May 2025. In the simplest form, it looks like this:

-- What Lovable generated pre-fix
create table public.user_messages (
  id uuid primary key default gen_random_uuid(),
  user_id uuid references auth.users,
  content text,
  created_at timestamptz default now()
);
-- No ALTER TABLE ... ENABLE ROW LEVEL SECURITY
-- No CREATE POLICY

A Supabase project's anon key is meant to be public. It sits in every browser that loads the front end. The whole safety model assumes that RLS is active on every table that anon can reach. Without it, any client - including a freshly created Lovable account using its own credentials to introspect - can read the whole table.

This is the same class of mistake that hit misconfigured Firebase databases in 2018 and exposed MongoDB instances in 2017. The vibe-coding twist is that the AI agent is the one writing the CREATE TABLE statements. It doesn't write the ENABLE ROW LEVEL SECURITY line unless you ask.
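For contrast, these are the two statements the generated schema was missing - a minimal example written against the user_messages table above:

```sql
-- The lines the agent didn't write: lock the table down,
-- then grant each user access to their own rows only.
alter table public.user_messages enable row level security;

create policy "owner only"
  on public.user_messages
  for all
  using (auth.uid() = user_id);
```

Once RLS is enabled, the default is deny-all; the policy then re-opens exactly the owner's rows.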

[Image: padlock resting on crumpled paper, symbolising a broken default.] Row Level Security is off by default in Supabase; the AI that produced the schema didn't turn it on. Source: unsplash.com

The paper trail on Lovable's RLS problem

This isn't a one-off. It's the repeat of a bug Lovable has been publicly warned about four times in a year.

  • May 2025 - CVE-2025-48757. Independent researcher Matan Getz of Replica Security discovered that Lovable-generated projects were shipping without RLS. A follow-up scan identified 303 unprotected endpoints across 170 production apps. Semafor summarised the findings with the headline "The hottest new vibe coding startup Lovable is a sitting duck for hackers."
  • October 2025 - Escape's platform-wide scan. Security firm Escape ran an automated scan against 5,600 vibe-coded apps and flagged more than 2,000 high-impact vulnerabilities and 400 exposed secrets. Lovable-built apps were the single largest slice.
  • February 2026 - Lovable app leaks 18,000 records. The Register documented a single Lovable-built application - an exam grading platform featured on Lovable's own Discover page - that leaked the private data of every one of its users after researcher Taimur Khan walked the same RLS path. Lovable's response framed it as a customer misconfiguration.
  • March 2026 - malicious hosting. BleepingComputer reported that attackers were abusing Lovable's infrastructure to host phishing and credential harvesters. Different failure mode, same theme: weak defaults made exploitation cheap.

We covered the broader pattern already in our piece on 69 vulnerabilities across 5 AI coding tools, and in our audit of Lovable's business where the platform crossed $400M ARR with 146 employees. At $2.7M of revenue per employee, there's not a lot of headcount left over for security review.

What's not confirmed

A few things are worth holding lightly until the picture clarifies.

  • The scope. Linton's thread implies any pre-November-2025 project is exposed. Independent verification so far covers individual apps; no one has published a platform-wide enumeration. The 170-app figure from CVE-2025-48757 is the best available floor, not a ceiling.
  • Lovable's stance. The company's official secure vibe coding post describes a scanner that "detects" missing RLS but doesn't retroactively enforce it. No public advisory has been issued for this April warning.
  • Blame allocation. Supabase's RLS documentation is clear that policies are the developer's responsibility. Lovable positioned itself as "the developer" for users who can't write SQL. The contract there has never been legally tested.

Action items if you shipped on Lovable before November 2025

  1. Go to your Supabase dashboard. For every table under the public schema, run select rowsecurity from pg_tables where schemaname = 'public' and tablename = '<name>'. If the answer is f (false), the table is open.
  2. Assume AI chat histories, prompts, and any cached API responses in the messages or conversations table are already compromised. Rotate every API key those chats could have referenced.
  3. Rotate the Supabase anon key and service role key via the dashboard. Yes, this will break your app's client - that's the point.
  4. For every table you expose, write a policy. The minimum is create policy "owner only" on <table> for all using (auth.uid() = user_id).
  5. If you can't reconstruct which user owned which row because your schema never had a user_id foreign key, the table has to come down. There's no way to retrofit ownership to historical rows.
  6. Tell your users. The EU AI Act and GDPR treat a missing access control as a data breach, regardless of whether any record was actually read.
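The per-table check in step 1 can also be run in one pass. This is a standard query against the pg_tables system view, nothing Lovable-specific:

```sql
-- Every table in the public schema with Row Level Security
-- still disabled. Any table listed here is readable by any
-- client that holds the project's anon key.
select schemaname, tablename
from pg_tables
where schemaname = 'public'
  and not rowsecurity;
```

An empty result means step 1 is done. Anything listed needs ENABLE ROW LEVEL SECURITY plus a policy before it's safe.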

The guidance above isn't novel. It's the same checklist every Supabase security post has published for two years. The newsworthy part is that a platform sold explicitly to non-developers still ships without it, still carries legacy projects that were never patched, and still hasn't said anything about the current report.


The vibe-coding movement's core bet is that the agent can do the boring parts - the schema, the auth, the access control - so the user can focus on the idea. That bet has been under pressure for a year. Today, if Linton's observation holds, it is under pressure again. We have asked Lovable for comment and will update this story when the company responds.

About the author

Elena, Senior AI Editor & Investigative Journalist, is a technology journalist with over eight years of experience covering artificial intelligence, machine learning, and the startup ecosystem.