
Global brands are moving fast on generative AI, but the rulebook is still catching up. New research from the World Federation of Advertisers (WFA) shows that while AI is now embedded in marketing workflows, most companies are still unsure how transparent they should be with consumers.
This article explores why AI disclosure is becoming a strategic issue for marketers, what the new WFA guidance actually covers, and how brands should think about transparency as both a compliance and trust-building lever.
Short on time?
Here’s a table of contents for quick access:
- Why global brands are calling for clearer AI disclosure rules
- What the WFA guidance says about AI-generated marketing content
- Why transparency is becoming a brand risk and trust issue
- What marketers should know about AI disclosure strategy

Why global brands are calling for clearer AI disclosure rules
Nearly four in five global brands are already using AI-generated or AI-enhanced content in consumer-facing marketing. But despite this rapid adoption, clarity around disclosure is still missing.
According to WFA research covering 27 multinational brands with a combined US$71 billion in ad spend, 78% are actively using AI in marketing. Yet 80% are calling for clearer global guidance on when and how to disclose that use.
The friction points are consistent:
- 61% cite unclear or fragmented regulations
- 46% are unsure about consumer expectations
- 39% point to a lack of industry best practice
Even though 67% of brands have already built internal AI policies, the absence of standardized guidance is creating inconsistency across markets and channels.
In short, brands are scaling AI faster than governance frameworks can keep pace.

What the WFA guidance says about AI-generated marketing content
To address this gap, the WFA has introduced voluntary best-practice guidance in collaboration with the International Council for Advertising Self-Regulation (ICAS).
The framework focuses on helping marketers decide when disclosure is necessary and when it is not. It categorizes AI usage into five key areas:
- People and likeness
- Product images
- Audio
- Background visuals
- Marketing copy
The guidance emphasizes context over blanket rules. For example:
- 96% of brands believe AI-generated voices that could be mistaken for real human voices should be disclosed
- 91% say synthetic humans in prominent roles should be labeled
- Only 4% think decorative AI-generated backgrounds require disclosure
This signals a shift toward risk-based disclosure rather than universal labeling.
The framework also warns against misleading uses of AI, such as exaggerating product results or fabricating endorsements. These are not just ethical concerns but potential regulatory flashpoints.

Why transparency is becoming a brand risk and trust issue
Transparency is no longer just a compliance checkbox. It is directly tied to brand reputation and consumer trust.
- 82% of brands say transparency is essential for protecting brand reputation
- 79% say it is critical for maintaining consumer trust
This matters even more in regions like APAC, where consumers are increasingly AI-aware. Many can already identify low-quality AI-generated content, and trust in such content remains low even as its use grows.
At the same time, regulatory pressure is building:
- The EU AI Act will require deepfake labeling from August 2026
- Markets like California and China are introducing their own disclosure rules
- Platforms like Meta, Google, and TikTok are acting as de facto regulators with their own policies
The result is a fragmented landscape where brands must navigate legal requirements, platform rules, and consumer expectations simultaneously.
Over-disclosure can also backfire. Too many labels may confuse users or dilute their impact, especially if they are applied inconsistently.

What marketers should know about AI disclosure strategy
For marketers, this is less about compliance checklists and more about strategic positioning. Here are practical ways to approach AI disclosure:
1. Treat disclosure as part of brand experience
Transparency should feel intentional, not reactive. Decide how AI labeling aligns with your brand voice and customer expectations.
2. Prioritize high-risk scenarios
Focus disclosure where it matters most:
- AI-generated people or voices
- Content that could mislead or impersonate
- Claims tied to product performance
Lower-risk elements like backgrounds or minor enhancements may not need the same treatment.
3. Build internal guidelines that scale globally
Given regulatory fragmentation, create flexible frameworks that can adapt across regions without breaking consistency. A minimal sketch of what that can look like in practice follows after this list.
4. Balance transparency with simplicity
Avoid overloading users with technical disclosures. The goal is clarity, not confusion.
5. Invest in quality, not just automation
With consumers increasingly able to spot low-quality AI content, execution quality becomes a trust signal in itself.
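One way to make such guidelines enforceable at scale is to encode them as data rather than as prose buried in brand manuals. The sketch below is a minimal, hypothetical illustration in Python: the content categories, region codes, disclosure tiers, and the `disclosure_level` helper are all assumptions made for illustration, not part of the WFA guidance or any specific regulation.

```python
# Hypothetical sketch: encoding a risk-based AI disclosure policy as data,
# so the same rules can be applied consistently across regions and channels.
# All category names, region codes, and tiers below are illustrative only.

# Disclosure tiers, ordered from least to most demanding.
NONE, OPTIONAL, REQUIRED = "none", "optional", "required"

# Baseline policy by content category (mirrors the high-risk list above).
BASELINE = {
    "synthetic_human": REQUIRED,    # AI-generated people in prominent roles
    "ai_voice": REQUIRED,           # voices that could pass as human
    "performance_claim": REQUIRED,  # claims tied to product results
    "background_visual": NONE,      # decorative elements
    "marketing_copy": OPTIONAL,
}

# Region-specific overrides layered on top of the baseline, e.g. for
# markets with stricter labeling rules (values here are made up).
REGIONAL_OVERRIDES = {
    "EU": {"background_visual": OPTIONAL},
    "CN": {"marketing_copy": REQUIRED},
}

def disclosure_level(category: str, region: str) -> str:
    """Return the disclosure tier for a content category in a given region."""
    policy = dict(BASELINE)
    policy.update(REGIONAL_OVERRIDES.get(region, {}))
    # Unknown categories default to the safest tier.
    return policy.get(category, REQUIRED)

if __name__ == "__main__":
    print(disclosure_level("ai_voice", "EU"))           # required
    print(disclosure_level("background_visual", "US"))  # none
```

The point is less the code than the design choice: when disclosure rules live in one versioned policy rather than in scattered regional briefs, exceptions become explicit, reviewable, and easy to update as regulations change.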
AI is now a core part of marketing production, but disclosure standards are still evolving. The WFA’s guidance is an early attempt to bring structure to a fast-moving space, but it also highlights a deeper shift.
Transparency is becoming a competitive differentiator. Brands that get it right can build trust while scaling AI-driven efficiency. Those that do not risk regulatory scrutiny and consumer backlash.
For marketers, the real challenge is not whether to disclose AI use. It is how to do it in a way that strengthens credibility rather than undermines it.

