
Meta AI Bug May Have Exposed Users’ Private Prompts and AI-Generated Content: Security Flaw Fixed, But Raises Fresh Privacy Concerns

Meta AI bug exposed users’ private prompts and responses. Discovered in Dec 2024, the flaw has been patched, but privacy concerns persist.



16 July 2025 12:56 PM IST

Meta, already under the scanner for its inconsistent privacy practices, recently patched a critical bug in its AI chatbot that could have exposed users' private prompts and AI-generated content to other users. The issue was flagged by cybersecurity researcher Sandeep Hodkasia in December 2024.

🔍 What Was the Meta AI Bug?

According to a report by TechCrunch, Hodkasia — the founder of AppSecure — identified the flaw and was awarded $10,000 under Meta’s Bug Bounty Program. The vulnerability was reported on December 24, 2024, and Meta deployed a fix exactly a month later, on January 24, 2025.

⚠️ How the Bug Worked

Hodkasia discovered that when a logged-in user edited their AI prompt or regenerated text or images through Meta’s chatbot, the system assigned a unique numerical identifier to that prompt-response pair. Due to a backend flaw, a user who manually altered this ID in their own requests could retrieve another user’s prompt and AI-generated content.

The main concern: these IDs were predictable and easy to guess, so unauthorized access would have been straightforward at scale, though no actual misuse has been reported so far.
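The pattern described here is commonly classified as an insecure direct object reference (IDOR). The sketch below is purely illustrative, not Meta’s code: the data store, the `fetch_prompt` function, and the user names are all invented. It shows how a lookup keyed only by a small, sequential numeric ID, with no check on who is asking, lets one user walk the ID space and collect other users’ prompts.

```python
# Hypothetical illustration of the flaw class (insecure direct object
# reference), not Meta's actual implementation. All names and data are invented.

# Prompt/response pairs stored under small, sequential numeric IDs.
PROMPTS = {
    1001: {"owner": "alice", "prompt": "Rewrite my resignation letter", "response": "..."},
    1002: {"owner": "bob",   "prompt": "Summarise my medical report",   "response": "..."},
    1003: {"owner": "carol", "prompt": "Generate a vacation itinerary", "response": "..."},
}

def fetch_prompt(requesting_user: str, prompt_id: int) -> dict | None:
    """Vulnerable lookup: returns the record for any valid ID without
    verifying that requesting_user actually owns it."""
    return PROMPTS.get(prompt_id)

if __name__ == "__main__":
    # Because the IDs are predictable, a single user can simply iterate over
    # nearby numbers and harvest everyone else's prompts.
    for candidate_id in range(1000, 1010):
        record = fetch_prompt("alice", candidate_id)
        if record and record["owner"] != "alice":
            print(f"ID {candidate_id} leaked {record['owner']}'s prompt: {record['prompt']}")
```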

🛡 Meta's Response and User Warning

Meta confirmed the issue and stated that the bug has been fixed, with no evidence of malicious exploitation. However, the incident has reignited debate over Meta’s security measures and the robustness of its authorization checks.

“The bug highlights a serious oversight in how Meta’s servers verified access permissions,” said Hodkasia.
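The standard remedy for this class of bug is the kind of server-side permission check the quote alludes to: verify ownership before returning any data, and give unknown and unauthorized IDs the same response so the ID space cannot be probed. The sketch below continues the hypothetical example above and is an assumption about the general fix, not a description of Meta’s actual patch.

```python
# Hypothetical sketch of the general remedy: enforce ownership on the server.
# Not Meta's actual fix; function and data names are invented.

def fetch_prompt_secure(requesting_user: str, prompt_id: int, prompts: dict) -> dict:
    """Looks up a prompt/response pair and enforces that the authenticated
    caller owns it. Unknown and unauthorized IDs raise the same error so an
    attacker cannot learn which IDs exist."""
    record = prompts.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        raise PermissionError("not found")
    return record

if __name__ == "__main__":
    demo = {1002: {"owner": "bob", "prompt": "Summarise my medical report", "response": "..."}}
    try:
        fetch_prompt_secure("alice", 1002, demo)
    except PermissionError:
        print("request rejected: caller does not own this prompt")
```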

While the company insists that the situation is under control, experts suggest users remain cautious when sharing sensitive prompts or personal data with Meta AI tools, as more bugs could surface in future updates.

📌 Key Takeaways:

  • A Meta AI bug could have let users see others’ private prompts and responses.
  • Discovered in December 2024, the bug was fixed by January 2025.
  • No misuse was found, but weak server-side checks and guessable IDs raised major concerns.
  • Security experts urge users to stay vigilant when interacting with AI tools.