The Hidden Costs of English-Only AI: Lost Sales, Broken Brands, and Risk to Human Lives


Track: Global Business | GB5 | Everyone
Thursday, October 16, 2025, 9:00am – 9:30am
Held in: Steinbeck 2
Presenters:
Fabiano Cid - Powerling
Katrina Montinola - 2GO Advisory Group
Host: Donna Parrish

Most AI models perform best in English, with quality often declining in other languages. This is more than a minor issue: it can lead to lost revenue, damage brands, and, in healthcare settings, even put people at risk. The underlying problem is bias: trained predominantly on English data, models internalize those patterns as the norm and replicate them, misinterpreting intent, sounding unnatural, and at times providing unsafe recommendations. In practice, this breaks onboarding and support processes, misreads customer queries, and generates content that fails to fit local languages or cultures. In high-stakes work, models may misunderstand common phrases that carry clinical significance. The causes are clear: insufficient and lower-quality data in many languages, weak cultural adaptation, and English-focused evaluation. As professionals working across languages daily, we notice these shortcomings early. Drawing on recent industry research, this talk raises essential questions: are models simply reflecting our biased world, or are they amplifying those biases? When it comes to inputs, examples, testing, or model selection, what changes could genuinely improve outcomes without slowing progress? And, crucially, whose names, stories, and languages must be included to ensure the burdens described in the title don't fall on our investors, customers, or patients?

Key Takeaways:
  • Understand how English-only AI quietly impacts business performance, brand perception, and even patient safety.
  • Learn to recognize the hidden ways bias shows up in multilingual content, customer journeys, and product experiences through real-world examples and open discussion.
  • Find out how language professionals can use their unique perspective to detect risks early, influence AI testing and data practices, and help organizations build technology that works, and sounds, right for every market.