
Google is doubling down on AI to fight AI. In its latest Ads Safety Report, the company outlines how Gemini-powered systems are now central to detecting and blocking malicious advertising at scale, especially as bad actors increasingly use generative AI to produce deceptive campaigns.
This article explores how Google’s evolving enforcement approach works, what the latest numbers reveal about the state of ad fraud, and what it all means for marketers navigating brand safety, compliance, and performance in an AI-saturated ecosystem.
Short on time?
Here’s a table of contents for quick access:
- How Google is using Gemini to block harmful ads at scale
- What the 2025 Ads Safety Report reveals about ad fraud trends
- Why AI-powered enforcement is becoming a competitive advantage
- What marketers should know about brand safety in an AI-first ad ecosystem

How Google is using Gemini to block harmful ads at scale
Google’s latest update centers on Gemini, its advanced AI model, now embedded into ad safety systems. Unlike earlier detection methods that relied heavily on keywords or rule-based filtering, Gemini analyzes intent across a wide range of signals, including account behavior, campaign patterns, and contextual cues.
This shift allows Google to move from reactive moderation to proactive prevention. According to the report, in 2025 over 99% of policy-violating ads were blocked before they were ever served.
The scale is significant. Google blocked or removed more than 8.3 billion ads and suspended 24.9 million advertiser accounts over the year. These actions included 602 million scam-related ads and 4 million accounts tied to fraudulent activity.
Gemini also plays a role in real-time enforcement. Most Responsive Search Ads are now reviewed instantly at submission, meaning harmful content can be stopped before it even enters the auction system.

What the 2025 Ads Safety Report reveals about ad fraud trends
The numbers point to a clear trend: ad fraud is not just growing; it is evolving.
Bad actors are increasingly using generative AI to produce large volumes of deceptive ads, making traditional detection methods less effective. This includes scams across high-risk categories such as financial services, healthcare, and gambling, where misleading claims can have serious consequences.
At the same time, enforcement is expanding beyond ads themselves. Google took action on more than 480 million web pages for policy violations, with issues ranging from harmful content to misrepresentation.
User behavior is also becoming a more important signal. Google processed more than four times as many user reports in 2025 compared to the previous year, using that feedback to accelerate enforcement cycles.
Regionally, Southeast Asia is seeing increased scrutiny. Regulators and platforms are tightening controls as AI-generated advertising becomes more common, raising concerns about transparency and consumer protection.

Why AI-powered enforcement is becoming a competitive advantage
Beyond safety, Google’s approach highlights a less obvious shift: enforcement accuracy is now a performance lever.
By improving detection precision, Google reduced incorrect advertiser suspensions by 80%. This matters because false positives can disrupt legitimate campaigns, waste budgets, and damage trust between platforms and advertisers.
Advertiser verification is another key layer. By validating identities before ads go live, Google is trying to stop bad actors at the entry point, not just during campaign execution.
For marketers, this signals a broader platform strategy. Trust and compliance are no longer just policy concerns; they are part of the product experience. Platforms that can maintain a clean ecosystem while minimizing friction for legitimate advertisers will have a clear advantage.

What marketers should know about brand safety in an AI-first ad ecosystem
As AI reshapes both ad creation and enforcement, marketers need to rethink how they approach brand safety and campaign strategy.
Here are a few practical takeaways:
1. Expect stricter pre-launch scrutiny
Real-time review at submission means campaigns need to be compliant from the start. There is less room for iteration with borderline messaging.
2. Invest in transparent creative and claims
As models get better at detecting intent, vague or misleading messaging is more likely to be flagged, especially in regulated industries.
3. Monitor platform-level signals, not just campaign metrics
Enforcement actions, account health, and policy updates are becoming just as important as CTR or ROAS.
4. Prepare for AI vs AI dynamics
If bad actors are using generative AI to scale scams, platforms will respond with more aggressive AI detection. This can create edge cases where legitimate ads are caught in the crossfire, so appeals processes and account documentation become more important.
5. Diversify risk across platforms
With enforcement tightening, relying too heavily on a single ad ecosystem increases exposure to sudden disruptions.

Google’s Ads Safety Report makes one thing clear: the battle between AI-generated threats and AI-powered defenses is accelerating.
For marketers, this is not just a platform update. It is a shift in how digital advertising operates at a foundational level. Compliance, transparency, and adaptability are becoming core competencies, not afterthoughts.
As enforcement systems grow more sophisticated, the brands that succeed will be the ones that align with them early, not react after the fact.