Why building your own D2C sales chatbot can be a faceplant for brands

AI
Chatbots
Ken Garff, Founder
August 20, 2025

Here are a few examples and the key takeaways:

  • You're on the hook for bot screw-ups. Remember Air Canada? Their chatbot totally made up a refund policy, and when a passenger sued, the airline got stuck paying up. Plus, weeks of bad press!
  • Safety slip-ups are seriously scary. A New Zealand supermarket's AI recipe bot started dishing out recipes for stuff like "chlorine-gas cocktails." They had to yank it and issue warnings, completely torpedoing customer trust.
  • Hallucinations can cost you big bucks. Two lawyers got fined because ChatGPT just made up a bunch of fake court cases. Imagine if your bot lies about warranties or return policies – hello, lawsuits!
  • Building and maintaining these bots is a full-time job. You've got to worry about policy tuning, logging, human review, and even insurance. It's way easier to just let a specialist platform handle all that heavy lifting so you can actually focus on selling.

Cases Gone Wrong:

1) Building your own customer‑facing chatbot may feel empowering, but the liability lands squarely on you when it goes off‑script. Ask Air Canada: its DIY web bot “invented” a post‑flight bereavement‑fare refund that didn’t exist. When the passenger sued, the airline tried to argue the bot was a “separate legal entity,” yet the tribunal still ordered the company to pay the fare difference plus fees, reprimanding Air Canada for failing to police information on its own site. Beyond the refund, the case forced weeks of negative headlines and a public apology, exactly the kind of brand‑erosion most merchants can’t afford.

2) Content‑moderation slip‑ups get even scarier when safety is involved. New Zealand supermarket chain Pak’nSave rushed out an LLM‑powered “Savey Meal‑Bot”; within days, users coaxed it into serving recipes for chlorine‑gas cocktails, “poison bread sandwiches” and other lethal dishes. The grocer had to yank the tool and issue warnings explaining that no human had vetted the outputs. That single lapse jeopardized consumer trust the brand had built over decades, and it happened because the retailer lacked the tooling and expertise to implement robust guardrails, red‑team the prompts or monitor abuse in real time.

3) Even if your chatbot never endangers customers physically, hallucinations can trigger costly legal fallout. In 2023 two U.S. lawyers were fined after ChatGPT fabricated six nonexistent court precedents in Mata v. Avianca; the judge called the episode “unprecedented” and imposed sanctions to deter future misuse. For an e‑commerce merchant, a single false claim about warranties, return windows or regulatory compliance could invite class actions or regulator scrutiny. Building and maintaining the layers that prevent those errors (policy tuning, prompt design, moderation layers, audit logging, human review, insurance coverage) is a full‑time job; the sketch below gives a sense of what even a toy version involves. Hand that complexity off to a specialist platform and you get battle‑tested moderation, continuous model evaluation and shared liability coverage, letting you focus on selling rather than managing an AI tool.
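To make concrete how much machinery “just add a chatbot” actually implies, here is a minimal sketch of the kind of wrapper a merchant would have to build and run themselves: a safety check on the model's draft reply, a naive grounding check for policy claims, an audit log, and an escalation path to a human. Everything in it is hypothetical: generate_reply() stands in for whatever model call the brand uses, and the blocklist and approved policy snippets are placeholders, not a production‑grade moderation system.

```python
# Illustrative sketch only: a bare-minimum guardrail wrapper around a
# hypothetical chatbot. All names, patterns and policy text are placeholders.
import json
import logging
import re
from datetime import datetime, timezone

logging.basicConfig(filename="bot_audit.log", level=logging.INFO)

# Hypothetical source of truth: policy-sounding answers must quote one of these.
APPROVED_POLICY_SNIPPETS = [
    "Returns are accepted within 30 days with proof of purchase.",
    "Bereavement fares must be requested before travel.",
]

# Crude stand-in for a real safety classifier / moderation service.
UNSAFE_PATTERNS = [r"\bchlorine\b", r"\bbleach\b.*\bammonia\b", r"\bpoison\b"]


def generate_reply(user_message: str) -> str:
    """Placeholder for the actual LLM call; returns a canned 'hallucination'."""
    return "You can request a bereavement refund up to 90 days after your flight."


def is_unsafe(text: str) -> bool:
    return any(re.search(p, text, re.IGNORECASE) for p in UNSAFE_PATTERNS)


def is_grounded_policy_claim(text: str) -> bool:
    # Naive check: anything that sounds like a policy answer must literally
    # quote an approved snippet. Real systems need retrieval plus evaluation.
    if not re.search(r"refund|return|warranty|fare", text, re.IGNORECASE):
        return True
    return any(s.lower() in text.lower() for s in APPROVED_POLICY_SNIPPETS)


def answer(user_message: str) -> str:
    draft = generate_reply(user_message)
    record = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_message": user_message,
        "draft_reply": draft,
    }
    if is_unsafe(draft):
        record["action"] = "blocked_unsafe"
        logging.info(json.dumps(record))
        return "Sorry, I can't help with that. A team member will follow up."
    if not is_grounded_policy_claim(draft):
        record["action"] = "escalated_ungrounded_policy_claim"
        logging.info(json.dumps(record))
        return "Let me check that policy with a human agent and get back to you."
    record["action"] = "sent"
    logging.info(json.dumps(record))
    return draft


if __name__ == "__main__":
    # The canned draft makes an unsupported refund claim, so it gets escalated
    # to a human instead of being shown to the customer.
    print(answer("Can I get a refund on a bereavement fare I already flew?"))
```

Even this toy version needs someone to keep the blocklist, the approved policy text and the escalation rules current, and to actually review the audit log, which is exactly the ongoing work the takeaways above describe.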

Footnotes:

  1. https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit
  2. https://www.theregister.com/2023/08/11/supermarket_reins_in_ai_recipebot/
  3. https://legal-mag.com/legal-fictions-and-chatgpt-hallucinations-mata-v-avianca-and-generative-ai-in-the-courts/