
AI Fails in Advertising: 8 Shocking Examples Brands Regret

Artificial Intelligence is moving fast, but that doesn’t mean it’s ready to lead your next product launch or run your marketing department. We’ve reviewed dozens of AI-driven campaigns, and plenty of them got it wrong, publicly and sometimes painfully.

Below are some of the most telling failures. Learn what went wrong so your brand doesn’t make the same mistake.

Trump’s AI-generated photos caused controversy

In 2025, President Donald Trump shared an AI-generated image depicting himself dressed as the pope, shortly after Pope Francis’s death. It was criticized by Catholic leaders and organizations as disrespectful during a period of mourning.

The New York State Catholic Conference said, “We just buried our beloved Pope Francis and the cardinals are about to enter a solemn conclave to elect a new successor of St. Peter. Do not mock us.”

Separately, AI-generated images of Trump with Black supporters circulated in 2024, seemingly designed to portray him as popular among the demographic. Critics accused the images of being misleading and manipulative.

Lesson: AI-generated images can offend or mislead your audience. Use them thoughtfully to avoid backlash and maintain trust.

Mango’s fake AI models shake customer trust

Fashion brand Mango tried what virtually every business does today: using generative AI to speed up content production. CEO Toni Ruiz said it enabled “faster content creation.” But customers weren’t impressed with how the company used it.

Shoppers raised concerns, pointing out that if the models and clothes aren’t real, the brand’s selling promise falls apart. One comment summed it up: “If the clothes and the women who wear them don’t exist, then what are they really selling?”

Others called out poor sizing and fit. AI-generated models tend to lack realistic body proportions, which makes it harder for shoppers to judge fit. For fashion, that’s a problem.

Lesson: Don’t lose customer trust for the sake of efficiency. If your product relies on visual clarity or fit, AI-generated images may do more harm than good.

DoNotPay’s AI legal bot collapses under scrutiny

DoNotPay.com marketed its chatbot as “the world’s first robot lawyer.” The company promised customers help with legal tasks, everything from contesting tickets to filing immigration paperwork.

But tests by Legal Cheek and others showed the bot couldn’t answer basic legal questions. At best, it worked for simple, low-risk tasks. At worst, it gave false confidence on serious legal research and complex issues.

In 2024, the FTC hit DoNotPay with a $193,000 fine for deceptive marketing.

Lesson: Tools that handle critical services, especially legal or financial ones, must be accurate and tested; good AI training data isn’t enough. Marketing won’t save a company when regulators and customers start asking questions.

Coca-Cola’s AI-generated holiday ads were rejected

Coca-Cola is known for iconic ads like the “I’d like to buy the world a Coke” campaign in 1971. So when it dropped a fully AI-generated campaign in 2024, expectations were high. 

The company called the campaign “a collaboration of human storytellers and the power of generative AI.” But many viewers saw the ad as a low-effort attempt and a sneaky way to avoid paying real artists, especially from a brand like Coca-Cola.

Even Alex Hirsch, creator of Gravity Falls, weighed in, mocking the ad and implying Coca-Cola’s iconic red came from “the blood of out-of-work artists.”

Lesson: Using Artificial Intelligence to create isn’t the issue. But businesses can’t get a pass when they replace real human creativity with tech. AI can support, but should not substitute for, your creative vision.

Read more: Top Controversial Ads: Marketing Lessons and Mistakes to Avoid

AI marketing “over-sells” Willy’s Chocolate Experience

In 2024, a Willy Wonka-themed event in Glasgow sold over 800 tickets at £35 each, based on colorful, AI-generated images promising a magical chocolate world. What guests got instead was a dimly lit warehouse, zero candy, and actors reading from loose scripts.

The backlash was immediate. Parents were furious. Social media posts went viral. The organizer issued an apology and promised refunds to attendees.

Lesson: If your AI-generated marketing over-sells what you actually offer, expect public backlash. Sell what’s real or deal with the consequences.

Google’s “Dear Sydney” Olympic Ad falls flat

Google’s “Dear Sydney” ad, aired during the 2024 Paris Olympics, had a simple premise: a father asking Gemini to help his daughter write a heartfelt letter to Olympic athlete Sydney McLaughlin-Levrone.

The response was overwhelmingly negative. Viewers criticized the replacement of genuine human expression with scripted AI emotion. One user wrote, “it completely negates why someone would write a letter to an athlete or anyone for that matter.”

Google removed the ad from its Olympic rotation. It remains on YouTube, with comments disabled.

Lesson: Emotional marketing fails immediately when it feels artificial. Use AI to assist, not replace real human moments.

Artisan’s “Stop Hiring Humans” billboard sparks backlash

In late 2024, Artisan launched a billboard campaign promoting its AI agents. The message? “Stop Hiring Humans.” Other slogans included:

  • Artisans don’t complain about work-life balance
  • Artisans won’t come into work hung over
  • Artisans are excited to work 70+ hours a week

The public response was swift and angry. Critics called it dystopian. Billboards were vandalized. One commenter summed it up as “Smugly evil.”

Artisan’s CEO defended the move, saying the ads “are somewhat dystopian, but so is Artificial Intelligence. The way the world works is changing.”

Lesson: Edgy ads can create noise, but if your message devalues people, the backlash can outweigh the buzz and paint your business in a negative light.

Google’s AI Overview provides wildly inaccurate answers

Google’s AI Overview launched in May 2024 with big ambitions: to expand what you can do with a Google search. According to CEO Sundar Pichai, it “expands the types of questions people feel confident asking.”

Unfortunately, early on it provided questionable, and even dangerous, answers. Some of them include:

  • Telling users they could add glue to pizza sauce to keep the cheese from sliding off
  • Recommending eating rocks for nutrition
  • Verifying that a dog has played in the NHL
  • Recommending a bath with a toaster as a great way to unwind

The unexpected responses left people wondering how Google could make AI mistakes like these.

Today, the tool has gotten much better, and the results are now labeled as “experimental.” But for a product integrated into one of the world’s most-used platforms, the early missteps were hard to ignore.

Lesson: Launches built on untested artificial intelligence can damage trust in your brand. Don’t deploy at scale without safeguards, especially if millions will see the results.

Avoid costly marketing fails with Fraud Blocker

We’ve seen how AI can tank even the best-orchestrated campaigns. But it’s not the only threat to your business’s marketing.

Click fraud costs marketers billions annually ($84 billion in 2023 alone). It not only drains your marketing budget; it also skews your campaign data, making it harder to measure true performance.

That’s where Fraud Blocker can help. Our solution monitors your ads in real time, ensuring that they only reach real users, not bots or click farms. Whether you’re using AI to improve your campaigns or not, protecting your ad spend is non-negotiable. 

Start your 7-day free trial and see how much real value you can gain from your campaigns.


ABOUT THE AUTHOR

Matthew Iyiola

Matthew is the resident content marketing expert at Fraud Blocker with several years of experience writing about ad fraud. When he’s not producing killer content, you can find him working out or walking his dogs.
