Cybersecurity researchers have identified a worrying escalation in the Nomani investment scam, a sophisticated fraudulent scheme that uses AI-generated deepfake advertisements and deceptive social media campaigns to lure victims into fake investment platforms. According to telemetry from Slovak cybersecurity firm ESET, the scam’s activity increased by around 62% year-over-year in 2025, as fraudsters enhanced their tactics with improved deepfake content, shorter ad lifecycles, and advanced social engineering techniques to evade detection and ensnare unsuspecting users.

This surge reflects a broader trend of AI-assisted financial fraud proliferating across major social networks such as Facebook, YouTube, Instagram, and Threads, where attackers exploit platform features and advertising frameworks to reach large numbers of potential victims worldwide.

What Is the Nomani Scam and How It Works

The Nomani investment scam is a type of investment fraud in which cybercriminals create convincing ads and promotional content that appear to represent legitimate financial opportunities, often promising exceptionally high returns on investments in stocks, cryptocurrency, or other ostensibly secure assets.

Traditionally, investment scams have depended on fake websites, phishing pages, and stolen social media identities. However, Nomani’s recent evolution leverages deepfake videos and AI-generated testimonials to create fabricated endorsements, sometimes mimicking public figures, influencers, or authoritative voices. These videos are integrated into sponsored posts or paid advertisements that appear genuine to the average user.

Unlike earlier, more static scams, Nomani’s ads frequently:

- Use high-resolution AI-generated content with better lip movement and audio-visual synchronization to reduce obvious cues of fakery.
- Exploit current events or topical personalities to lend false credibility.
- Run on multiple social platforms and rotate rapidly to avoid automated takedown.
- Use embedded forms and surveys within social media platforms to collect sensitive personal and financial data without redirecting users to clearly malicious external sites.

Attacks often begin with a user seeing an impressive advertisement or deepfake endorsement claiming massive financial returns. When the victim clicks through, they are taken to a fabricated investment platform where they are asked to fund an account. Initially, the scam site may even display illusory “returns” to build trust, a classic component of so-called “pig butchering” or advance-fee scams, before making withdrawal impossible without paying additional fees, taxes, or “verification” charges.

AI Is Central to Nomani’s Growth

A key reason behind Nomani’s 62% growth in 2025 is the increasing use of artificial intelligence to enhance the realism and effectiveness of scam content. ESET’s threat data shows that not only have deepfake videos become more realistic, but phishing sites associated with the scam also show signs of being AI-generated or AI-assisted in design.

Earlier in the scam’s timeline, documented extensively in 2024, deepfake videos with noticeable artifacts and low-quality production could be spotted with careful scrutiny. However, recent improvements include:

- Higher-resolution video renderings that mask common deepfake flaws (e.g., unnatural blinking or mismatched lip movements).
- Improved audio-visual synchronization, making fake testimonials appear more credible.
- AI-generated HTML templates for phishing pages, indicating that attackers are using AI not just as a visual trick but to scale up the construction of scam infrastructure itself.

Attackers have also begun exploiting legitimate advertising tools, such as embedded forms, surveys, and in-platform data collection features, to harvest information like names, email addresses, phone numbers, and even financial details, all without immediately sending users to external phishing pages.
Platforms Targeted: Facebook, YouTube and Beyond

Nomani campaigns were initially most prevalent on Facebook and Instagram, where scammers purchased inexpensive ad placements and exploited weaknesses in ad review systems to disseminate fraudulent content quickly. These platforms’ wide user bases and the capacity to micro-target ads based on interests and demographics made them ideal vectors for reaching potential victims.

In 2025, Nomani operators broadened their distribution to include:

- YouTube video ads, often featuring AI-generated clips that run before or alongside content.
- Other social media properties where paid promotions and user-generated content can spread rapidly.

Because many scam campaigns run only for short durations, sometimes just a few hours, they can evade both automated detection tools and manual review by platform security teams, rotating ads and domains to stay ahead of takedown efforts.

Geographical Spread and Victim Profiles

ESET telemetry indicates that the largest volumes of Nomani-related harmful URLs this year originated from, or were blocked in, countries including Czechia, Japan, Slovakia, Spain, and Poland. These regions, particularly those with high social media use and significant cryptocurrency interest, appear to have been primary targets for the scam’s growth. The pattern is not exclusive, however: similar scams have been reported globally, and deepfake-enhanced ads for investment fraud consistently emerge in varied markets.

Although precise data on total financial losses directly attributable to Nomani in 2025 has not been publicly released in aggregated form, anecdotal reports and law enforcement filings suggest that victims across different countries have transferred substantial funds to these fraudulent platforms, often only discovering the scam after they are unable to withdraw their assets.
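The short campaign lifetimes described above are one signal defenders can act on programmatically. The sketch below is purely illustrative: the `AdSighting` record, the example domains, and the six-hour threshold are assumptions chosen for the example, not part of any real platform's detection pipeline.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AdSighting:
    """Hypothetical record of when an ad-linked domain was first and last observed."""
    domain: str
    first_seen: datetime
    last_seen: datetime

def flag_short_lived(sightings, max_lifetime=timedelta(hours=6)):
    """Return domains whose entire ad run fits inside a short window.

    Very short lifetimes are a common trait of rotating scam campaigns;
    the 6-hour threshold here is an assumption, not an established cutoff.
    """
    return sorted({
        s.domain for s in sightings
        if s.last_seen - s.first_seen <= max_lifetime
    })
```

A heuristic like this would only be one input among many: legitimate flash promotions can also be short-lived, so flagged domains would still need reputation or content checks before any takedown action.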
Law Enforcement and Post-Scam Deception Tactics

The lifecycle of Nomani scams often includes a secondary fraud phase after victims realize they’ve been cheated. Attackers sometimes re-target victims by:

- Promising assistance with recovering their stolen funds through faux organisations that claim affiliation with Europol or INTERPOL.
- Re-advertising “help” services that charge additional fees for recovery assistance, leading to second-stage financial loss.

These tactics prey on victims’ hopes of recouping losses and exploit trust in law enforcement institutions to lure victims back into paying more money.

Platform Responsibility and Tech Industry Challenges

The rise of AI-assisted scams like Nomani poses a difficult dilemma for social media platforms and ad networks. On the one hand, ads are core revenue drivers for companies like Meta and YouTube. On the other, the prevalence of scams amounts to billions in fraudulent ad placements, undermining trust in digital advertising and causing real financial harm. In fact, recent media investigations revealed that a significant share of digital ad revenue, including potentially billions from regions with lax enforcement mechanisms, came from scam or prohibited content distributed through agency partnerships, highlighting the scale of the problem.

Efforts to tighten advertising verification and remove scam ads have improved somewhat, but short-lived scam campaigns, domain rotation, and abuse of legitimate advertising features continue to challenge platform moderation systems.

Why Nomani Scams Are Harder to Detect

Several factors make AI-enhanced investment scams like Nomani difficult for individuals and automated systems to detect:

1. Realistic Deepfake Content
AI tools can now produce videos and audio that closely mimic real individuals, including public figures, influencers, or known personalities, making it easier for unsuspecting users to trust scam ads.

2. Shortened Campaign Lifetimes
By rotating ads quickly and running campaigns only for short durations, scammers reduce the window of exposure to detection and takedown systems.

3. Use of Legitimate Platform Tools
Rather than directing users immediately to external phishing sites, scammers increasingly use legitimate ad tools, embedded forms, and in-platform engagement mechanisms to harvest data without raising immediate red flags.

4. AI-Assisted Phishing Pages
Some phishing templates and scam pages show signs of being generated with AI, improving their structural legitimacy and reducing obvious errors that typically tip off cautious users.

Tips to Spot and Avoid Nomani-Style Scams

As cybercrime evolves, users and organizations must stay vigilant. Experts recommend the following strategies to identify and avoid investment scams:

► Scrutinize Unsolicited Investment Offers
If an ad or message promises exceptionally high returns with minimal risk, especially when tied to a viral video, treat it with scepticism.

► Verify Investment Platforms
Always check whether the financial service or investment platform is regulated by recognized financial authorities. Legitimate platforms will have verifiable registration and oversight.

► Examine URLs Carefully
Fake domains are often newly registered, resemble legitimate sites only superficially, or contain minor spelling differences. Hover over links before clicking and examine the domain’s integrity.

► Don’t Trust Faces Alone
Deepfake videos may look real, but careful inspection, such as detecting unnatural eye movements, odd lip sync, or inconsistent lighting, can help spot AI-forged content.

► Enable Security Tools
Browser extensions, anti-phishing filters, and DNS filtering tools can block access to known malicious domains. Security platforms and SIEM solutions should integrate threat feeds to block suspicious URLs in real time.
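The URL checks and blocklist filtering recommended above can be sketched as a small heuristic. This is an illustrative sketch only, not a real reputation service: the `TRUSTED_DOMAINS` set, the `KNOWN_BAD` blocklist entries, and the 0.8 similarity threshold are all assumptions chosen for the example, and a clean result does not mean a site is safe.

```python
import difflib
from urllib.parse import urlparse

# Hypothetical inputs for illustration: brands the user actually deals with,
# and a feed of domains already reported as malicious.
TRUSTED_DOMAINS = {"coinbase.com", "fidelity.com", "vanguard.com"}
KNOWN_BAD = {"quick-profit-invest.example"}

def flag_suspicious(url: str, similarity_threshold: float = 0.8) -> list[str]:
    """Return human-readable warnings for a URL. Heuristics only:
    an empty result does NOT mean the site is legitimate."""
    warnings = []
    host = (urlparse(url).hostname or "").lower().removeprefix("www.")

    if host in KNOWN_BAD:
        warnings.append(f"{host} appears on a blocklist")

    for brand in TRUSTED_DOMAINS:
        if host == brand:
            return warnings  # exact match with a trusted domain
        # Near-miss spellings (one swapped or doubled letter) are a classic scam cue.
        ratio = difflib.SequenceMatcher(None, host, brand).ratio()
        if ratio >= similarity_threshold:
            warnings.append(f"{host} closely resembles {brand} (similarity {ratio:.2f})")

    return warnings
```

For example, `flag_suspicious("https://www.coinbbase.com/invest")` would flag the doubled-letter lookalike, while the exact trusted domain passes silently. In practice such a check would sit alongside domain-age lookups and live threat feeds rather than a hard-coded list.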
► Report Fraud Quickly
If you suspect a scam, report it to platform moderation teams and local law enforcement. Quick reporting can help reduce spread and assist victim recovery where possible.

Conclusion: The Growing Threat of AI-Assisted Financial Fraud

The Nomani investment scam exemplifies how cybercriminals are increasingly fusing social engineering with powerful AI tools to create more convincing, harder-to-detect fraud campaigns. With a 62% year-over-year increase in activity in 2025, enhanced deepfake ads on platforms like Facebook and YouTube are helping criminals target victims with refined techniques that evade traditional defences.

While some progress has been made, such as reductions in detections in the latter half of 2025 due to law enforcement and platform takedown efforts, these scams remain adaptable and persistent. Defence against such threats requires a combination of user education, proactive platform moderation, regulatory oversight, and advanced cybersecurity tools designed to detect AI-generated content and fraudulent investment schemes before they can cause significant financial harm.