In this week’s edition of the ThreatsDay Bulletin, cybersecurity researchers warn of a rapidly shifting threat landscape where traditional attack vectors are fused with advanced stealth techniques, artificial intelligence (AI) abuse, and innovative exploitation methods. From stealthy loaders used to distribute versatile malware to critical flaws in AI chatbots that could lead to remote code execution or data theft, this week’s findings emphasize how threat actors continue to blend legitimate tools and everyday technologies with malicious intent.

Cybersecurity is no longer just about defending against obvious threats like phishing or ransomware — attackers increasingly rely on precision, automation, and exploitation of trust to infiltrate systems and evade detection. Here’s a comprehensive breakdown of the week’s most critical stories, key vulnerabilities, emerging tactics, and recommended defensive strategies.

1. Fake PoC Exploits Deliver Stealthy Backdoor WebRAT

One of the most concerning developments reported this week involves fake proof‑of‑concept (PoC) exploits published in code repositories that target inexperienced professionals and students in the information security field. These repositories include detailed documentation of real vulnerabilities — such as CVE‑2025‑59295, CVE‑2025‑10294, and CVE‑2025‑59230 — to gain trust before tricking victims into downloading a ZIP archive that drops a backdoor called WebRAT.

The executable inside the ZIP (rasmanesc.exe) is capable of:

  • Escalating privileges on Windows systems,
  • Disabling Microsoft Defender, and
  • Fetching the WebRAT backdoor from an external server.

WebRAT not only provides remote control of infected machines but also steals data from popular applications such as Telegram, Discord, Steam, and even cryptocurrency wallets. The data‑stealing capability makes this threat particularly potent and highlights how malicious actors are now weaponizing even professional‑looking PoC code to trick developers and security trainees.

2. AI Chatbot Flaws Expose Systems to Prompt Injection and Data Theft

Artificial intelligence — once hailed as a defender in cybersecurity — is itself becoming a target. This week, multiple critical vulnerabilities were disclosed in public AI chatbot implementations, including a major flaw in Eurostar’s public chatbot that has since been patched.

The key issue stems from how chatbots handle user input history — by relaying entire conversations to the backend API while performing safety checks only on the latest message. This opens the door to prompt injection attacks, where malicious actors manipulate earlier parts of a conversation to trigger unintended behavior in the AI and bypass guardrails.

Security firm Pen Test Partners noted that attackers could:

  • Steer the AI model to exfiltrate sensitive information,
  • Inject executable HTML or JavaScript into chat interfaces, and
  • Exploit insufficient input validation to manipulate chatbot responses.

This vulnerability underscores that old‑school web security flaws — such as improper input validation — still apply even when AI is in the mix. Threat actors can exploit these oversights to subvert protections or impersonate legitimate chatbot behavior for malicious gain.

3. Critical Zero‑Days in Open‑Source Infrastructure Components

A competition organized by cybersecurity firm Wiz and zeroday.cloud revealed 11 critical zero‑day vulnerabilities in widely used open‑source infrastructure software — including container runtimes, AI infrastructure libraries like vLLM and Ollama, and databases such as Redis, PostgreSQL, and MariaDB.

Among the most severe discovered is a container escape flaw that undermines one of the core security assumptions of cloud computing: that workloads running in containers remain isolated from each other. If successfully exploited, this vulnerability allows attackers to break out of a container and access the underlying host infrastructure, potentially compromising multiple tenants in shared cloud environments.

The discovery highlights a troubling trend — as infrastructure becomes more complex and distributed, critical vulnerabilities that affect foundational layers of cloud and container ecosystems can have cascading impacts across systems, services, and users.

4. Persistent Malware Loaders Target Governments and Manufacturing

Cybersecurity firm Cyble reported a new phishing campaign targeting manufacturing and government organizations in Italy, Finland, and Saudi Arabia.

Unlike many single‑purpose malware drop campaigns, this operation uses a unified, commodity loader capable of distributing a wide array of malware families — including PureLogs, XWorm, Katz Stealer, DCRat, and Remcos RAT.

The infection chain involves:

  • Weaponized Office documents exploiting old vulnerabilities like CVE‑2017‑11882,
  • Malicious SVG files with embedded scripts, and
  • ZIP archives containing LNK shortcuts that launch the loader.

Remarkably, the campaign employs steganographic techniques — hosting image files containing hidden malware code on legitimate platforms — to evade file‑based detection tools and blend malicious traffic into normal use patterns.

The diversity of delivery vectors and malware types suggests that this loader is either shared across multiple threat groups or sold as part of a malware‑as‑a‑service (MaaS) ecosystem.

5. Docker’s AI Assistant “Ask Gordon” Patched Against Prompt Injection

The popular containerization platform Docker addressed a security issue in its embedded AI assistant, Ask Gordon, which could have allowed threat actors to perform prompt injection attacks by poisoning metadata from Docker Hub repositories.

Attackers could craft a malicious repository description containing hidden instructions that the AI assistant would interpret and execute, leading to:

  • Exfiltration of sensitive data,
  • Automatic fetching of additional payloads from attacker‑controlled servers, and
  • Potential execution of arbitrary commands without user awareness.

In response, Docker released patches to tighten how the AI assistant interprets repository metadata and prevent automatic execution of unverified AI instructions.

This incident highlights how AI integrations in developer tooling can inadvertently introduce new attack surfaces, particularly when the system trusts content fetched from public repositories.

6. Targeted Phishing Attacks Hit Israeli Organizations

Researchers from Seqrite Labs identified a threat cluster — likely based in Western Asia — targeting organizations in Israel, including IT firms, MSPs (Managed Service Providers), HR teams, and software developers.

This campaign uses phishing lures in Hebrew that impersonate internal communications and employ a multi‑stage infection process. The initial email delivers a PDF attachment that directs users to a Dropbox link hosting malware.

Analysis showed:

  • Use of brand spoofing impersonating security vendors such as SentinelOne and Check Point,
  • Redirection to a fake Cloudflare landing page, and
  • Attempts to exploit a legacy Chrome vulnerability (CVE‑2018‑6065) with a JavaScript shellcode loader.

The campaign dynamically tailors payloads to each target, indicating careful reconnaissance and customization — hallmarks of advanced phishing operations.

Trends Driving 2025’s Cyber Threat Landscape

Beyond specific threats, this week’s ThreatsDay Bulletin reveals broader patterns reshaping cybersecurity:

Attackers Blending In

Threat actors increasingly avoid loud, conspicuous attacks. Instead, they embed malicious code within trusted tools, libraries, or services to evade detection and sustain persistent access.

AI: Both Exploit Target and Weapon

  • Adversaries use AI to create more convincing phishing, generate stealth loaders, and craft malware that adapts to defense mechanisms.
  • At the same time, AI integrations in chatbots, code repositories, and developer tools can be exploited if trust boundaries aren’t strictly enforced.

Infrastructure and Cloud Remain High‑Value Targets

From container escape flaws to orchestrated phishing campaigns aimed at enterprise targets, attackers are focusing on software supply chains, cloud environments, and shared infrastructure — areas where a single breach can scale with catastrophic impact.

How Organizations Should Respond

Given these evolving tactics, defenders must adapt beyond traditional antivirus and firewall solutions. Recommended strategies include:

Harden Detection and Response

  • Deploy behavioral analytics and endpoint detection response (EDR) tools capable of spotting subtle deviations indicative of stealth loaders or AI misuse.
  • Monitor for indicators of compromise (IOCs) related to unusual application behavior, unauthorized repository activity, or anomalous AI assistant actions.

Secure AI Integrations

  • Treat AI‑generated outputs as untrusted input until validated.
  • Implement robust input validation, sanitization, and guardrails in AI‑powered interfaces — particularly those that interact with developer tooling or production data.

Improve User Awareness

  • Train staff and developers to recognize fake PoC code repositories, phishing emails, and social engineering campaigns.
  • Encourage security knowledge sharing between teams to reduce the risk posed by “trusted” attacks targeting professionals and students.

Patch Aggressively

  • Prioritize patching not only operating systems and critical applications but also open‑source infrastructure components, container runtimes, and AI assistance frameworks.

Comprehensive Defense in Depth

  • Use zero‑trust networking, multi‑factor authentication, and least‑privilege access controls.
  • Continuously audit and update security policies to respond to chained attack vectors involving AI misuse, supply chain exploitation, or container breakouts.

Final Thoughts

This week’s ThreatsDay Bulletin shows that cyber threats are evolving faster than ever, blending stealth, AI manipulation, and traditional exploitation methods. From misleading PoC repositories to exploited AI assistants and sophisticated phishing campaigns, defenders must adopt a multi‑layered security posture that anticipates these blended tactics rather than reacting to them.

The takeaway for 2026 is clear: cybersecurity isn’t just about blocking known threats — it’s about understanding how attackers leverage trust, automation, and everyday tools to bypass detection and exploit weaknesses that are often overlooked. The more defenders learn about the adversary’s evolving playbook, the better prepared they will be to detect, defend, and deter the next wave of sophisticated cyberattacks.