
12 AI Fails in Advertising (2026 Update): Examples and Lessons

Artificial Intelligence is moving fast, but that doesn’t mean it’s ready to lead your next product launch or run your marketing department. We’ve reviewed dozens of AI-driven campaigns, and plenty of them got it wrong: publicly, and sometimes painfully.

Below are some of the most telling failures from the past three years. Learn what went wrong so your brand doesn’t make the same mistakes.

The 2026 Winter Olympics opening ceremony raised serious AI concerns

The Milan Cortina 2026 Winter Olympics opening ceremony featured an animated sequence showing actress Sabrina Impacciatore skiing and skating through a century of Olympic host cities. 

The visuals drew immediate suspicion online, and many viewers were convinced the sequence was AI-generated. Fans criticized the segment for potentially taking a job from a human animator.

The Games’ promotional graphics also raised eyebrows when they appeared to closely mirror the style of Japanese miniatures artist Tatsuya Tanaka. No credit was given, and Tanaka had no known involvement, though the IOC has not publicly confirmed or denied the use of AI in these materials.

Further inconsistencies fueled the speculation:

  • Athletes’ national flags appeared to be clumsily photoshopped onto uniforms after the fact
  • Several graphics showed the Olympic rings rendered incorrectly, seemingly breaking the Games’ own brand guidelines

Lesson: Audiences react negatively to AI standing in for real creative talent, especially when that AI appears to copy a living artist’s style.

AI.com spent $85 million on a Super Bowl ad and crashed on launch (2026)

AI.com made one of the boldest bets of Super Bowl LX: $70 million on the domain name alone, plus another $15 million on airtime. The site crashed almost immediately after the ad aired as millions of viewers tried to sign up at once.

But the bigger problem was the message. The ad was flashy and vague, leaving most viewers asking, “What is this about?”

The Kellogg School of Management gave it a failing grade, stating: “AI.com was a super confusing spot because it wasn’t clear what the product is, or why you should go and use it.” 

$85 million spent, and viewers couldn’t understand what they were being sold, let alone follow through on the call to action.

Lesson: A big marketing moment means nothing if your product can’t handle the attention, and even less if nobody understands what you’re selling.

Meta's AI auto-edited a top-performing ad without permission (2025)

Meta’s Advantage+ feature is designed to help advertisers get more out of their campaigns by automatically optimizing creative. But for some advertisers, the tool went further than expected, swapping out elements of their ads entirely without approval.

In one widely shared example, a marketer discovered Meta’s AI had replaced the creative in a top-performing ad with an AI-generated elderly woman. Marketers were already looking for ways to turn off Meta Advantage+ before the incident, which only reinforced a growing concern: AI changes don’t always align with your brand.

Meta has not publicly addressed the specific complaints.

Lesson: AI automation tools that override advertiser intent can be a liability, especially when they are powered by LLM systems prone to hallucinations and errors.

McDonald's pulled its AI Christmas ad after just three days (2025)

McDonald’s Netherlands released a 45-second AI-generated holiday ad titled “The Most Terrible Time of the Year.” It was a darkly comic take on Christmas chaos, inviting viewers to hide out at McDonald’s until the holidays were over.

Audiences were not amused. Users on X called it “unsettling,” “creepy,” “poorly edited” and “inauthentic,” pointing to the uncanny valley feel of the AI-generated characters. The concept itself drew equal criticism. As one user put it: “Regardless of being made with AI, this is just a bad idea for a commercial.” The ad was pulled within three days.

Lesson: Negativity around a beloved occasion like Christmas rarely lands, and pairing it with AI, a technology audiences already view with suspicion, can worsen reception.

Trump’s AI-generated photos caused controversy (2025)

In 2025, President Donald Trump shared an AI-generated image depicting himself dressed as the pope, shortly after Pope Francis’s death. Catholic leaders and organizations criticized it as disrespectful during a period of mourning.

The New York State Catholic Conference said, “We just buried our beloved Pope Francis and the cardinals are about to enter a solemn conclave to elect a new successor of St. Peter. Do not mock us.”

Separately, AI-generated images of Trump with Black supporters circulated in 2024, seemingly designed to portray him as popular among the demographic. Critics accused the images of being misleading and manipulative.

Lesson: AI-generated images can offend or mislead your audience. Use them thoughtfully to avoid backlash and maintain trust.

Mango’s fake AI models shake customer trust (2024)

Fashion brand Mango tried what virtually every business does today: using generative AI to speed up content production. CEO Toni Ruiz called it “faster content creation.” But customers weren’t impressed by how the company used it.

Shoppers raised concerns, pointing out that if the models and clothes aren’t real, the brand’s selling promise falls apart. One comment summed it up: “If the clothes and the women who wear them don’t exist, then what are they really selling?”

Others called out poor sizing and fit. AI-generated models tend to lack realistic body proportions, which makes it harder for shoppers to judge fit. For fashion, that’s a problem.

Lesson: Don’t lose customer trust for the sake of efficiency. If your product relies on visual clarity or fit, AI-generated images may do more harm than good.

DoNotPay’s AI legal bot collapses under scrutiny (2024)

DoNotPay.com marketed its chatbot as “the world’s first robot lawyer.” The company promised customers help with legal tasks, everything from contesting tickets to filing immigration paperwork.

But tests by Legal Cheek and others showed the bot couldn’t answer basic legal questions. At best, it worked for simple, low-risk tasks. At worst, it gave false confidence on serious legal research and complex issues.

In 2024, the FTC hit DoNotPay with a $193,000 fine for deceptive marketing.

Lesson: Tools that handle critical services, especially legal or financial ones, must be accurate and tested; good AI training data isn’t enough. Marketing won’t save a company when regulators and customers start asking questions.

Coca-Cola’s AI-generated holiday ads were rejected (2024)

Coca-Cola is known for iconic ads like the “I’d like to buy the world a Coke” campaign in 1971. So when it dropped a fully AI-generated campaign in 2024, expectations were high. 

The company called the campaign “a collaboration of human storytellers and the power of generative AI.” But many viewers saw the ad as a low-effort attempt, and a sneaky way to avoid paying real artists, especially from a brand like Coca-Cola.

Even Alex Hirsch, creator of Gravity Falls, weighed in, mocking the ad and implying Coca-Cola’s iconic red came from “the blood of out-of-work artists.”

Update: Coca-Cola has since released its 2025 holiday ad, which looks much better and hasn’t drawn the same backlash.

Lesson: Using artificial intelligence to create isn’t the issue. But businesses don’t get a pass when they replace real human creativity with tech. AI can support your creative vision, but it should not substitute for it.

Read more: Top Controversial Ads: Marketing Lessons and Mistakes to Avoid

AI marketing “over-sells” Willy’s Chocolate Experience (2024)

In 2024, a Willy Wonka-themed event in Glasgow sold over 800 tickets at £35 each, based on colorful AI-generated images promising a magical chocolate world. What guests got instead was a dimly lit warehouse, zero candy, and actors reading from loose scripts.

The backlash was immediate. Parents were furious. Social media posts went viral. The organizer issued an apology and promised refunds to attendees.

Lesson: If your AI-generated marketing over-sells what you actually offer, expect public backlash. Sell what’s real or deal with the consequences.

Google’s “Dear Sydney” Olympic Ad falls flat (2024)

The message was simple enough: a father asking Gemini to help his daughter write a heartfelt letter to Olympic athlete Sydney McLaughlin-Levrone.

The response was overwhelmingly negative. Viewers criticized the replacement of genuine human expression with scripted AI emotion. One user wrote, “it completely negates why someone would write a letter to an athlete or anyone for that matter.”

Google removed the ad from its Olympic rotation. It remains on YouTube, with comments disabled.

Lesson: Emotional marketing fails immediately when it feels artificial. Use AI to assist, not replace real human moments.

Artisan’s “Stop Hiring Humans” billboard sparks backlash (2024)

In late 2024, Artisan launched a billboard campaign promoting its AI agents. The message? “Stop Hiring Humans.” Other slogans included:

  • Artisans don’t complain about work-life balance
  • Artisans won’t come into work hung over
  • Artisans are excited to work 70+ hours a week

The public response was swift and angry. Critics called it dystopian. Billboards were vandalized. One commenter summed it up as “Smugly evil.”

Artisan’s CEO defended the move, saying the messages “are somewhat dystopian, but so is Artificial Intelligence. The way the world works is changing.”

Lesson: Edgy ads can create noise, but if your message devalues people, the backlash can outweigh the buzz and paint your business in a negative light.

Google’s AI Overview provides wildly inaccurate answers (2024)

Google’s AI Overview launched in May 2024 with big ambitions: to expand what you can do with a Google search. According to CEO Sundar Pichai, it “expands the types of questions people feel confident asking.”

Unfortunately, in its early days it provided questionable, and even dangerous, answers, including:

  • Telling users they could add glue to pizza to keep the cheese from sliding off
  • Recommending eating rocks for nutrition
  • Verifying that a dog has played in the NHL
  • Recommending a bath with a toaster as a great way to unwind

The unexpected responses left people wondering how Google could make AI mistakes like these.

Today, the tool has gotten much better, and the results are now labeled as “experimental.” But for a product integrated into one of the world’s most-used platforms, the early missteps were hard to ignore.

Lesson: Launches based on untested artificial intelligence can harm your brand trust. Don’t deploy at scale without safeguards, especially if millions will see the results.

Avoid costly marketing fails with Fraud Blocker

We’ve seen how AI can tank even the best-orchestrated campaigns. But it’s not the only threat to your business’s marketing.

Click fraud costs marketers billions annually ($84 billion in 2023 alone). It not only drains your marketing budget; it can also skew your campaign data, making it harder to measure true performance.

That’s where Fraud Blocker can help. Our solution monitors your ads in real time, ensuring that they only reach real users, not bots or click farms. Whether you’re using AI to improve your campaigns or not, protecting your ad spend is non-negotiable. 

Start your 7-day free trial and see how much real value you can gain from your campaigns.


ABOUT THE AUTHOR

Matthew Iyiola

Matthew is the resident content marketing expert at Fraud Blocker with several years of experience writing about ad fraud. When he’s not producing killer content, you can find him working out or walking his dogs.
