What global voices are telling us about language, trust, and generative AI

Aimee Ansari, Executive Director, CLEAR Global · 22 Aug 2025 · 3 mins
RWS recently released its Riding the AI Shockwave report, which explores how people around the world are responding to the rapid rise of generative AI. One of the most striking findings is that people everywhere increasingly expect service communications to be delivered in their own language.
 
That might sound obvious – but it matters. Being able to access information in the language you’re most comfortable with isn’t just more effective, it’s a matter of respect and dignity. When organizations communicate in people’s languages, they open the door to trust, inclusion, and real engagement.
 
But the report also revealed something else that stood out: the enthusiasm for generative AI in Africa and Asia.

A broad global lens

To build the report, RWS surveyed nearly 4,900 people across 14 countries and 20 languages – including some low-resource languages that are often overlooked in research of this kind. The respondents were largely business-to-business buyers – not necessarily the communities we often think of when discussing the social impact of technology. Yet their perspectives provide valuable insights for everyone.
 
What we found was a savvy, forward-looking audience. Across Africa and Asia especially, respondents were already experimenting with generative AI, teaching themselves how to use it, and actively shaping their expectations of what it should deliver.

Building trust in generative AI

When asked what would help them trust generative AI, three themes stood out clearly:
  • Transparency: People want to know when they’re interacting with generative AI. Simply being told whether AI is being used was seen as critical.
  • Explainability: Respondents want to understand where the information comes from. Which sources? What data? Clarity here builds confidence.
  • Human involvement: Perhaps most importantly, they want to know how humans are engaged in developing, monitoring, and overseeing AI systems. The human role is seen as essential to making AI trustworthy.
When combined with the call for language inclusivity, these factors paint a clear picture: for generative AI to succeed and scale, it must be transparent, explainable, human-centered, and multilingual.

Implications beyond the commercial sector

While the research focused on consumers of business services, the implications are just as relevant to the social impact sector, where questions of ethics, safety, and equitable access are top of mind. If generative AI is to make a real difference in people’s lives, we must go beyond innovation for its own sake and focus on building systems people can trust, in the languages they need.
 
That’s especially important when it comes to low-resource languages – often the very languages spoken in regions most eager to embrace AI. The message from RWS's research is clear: if we want generative AI to be used well, we must prioritize trust, transparency, human oversight, and language inclusivity from the start.
 
About the author

Aimee Ansari

Executive Director, CLEAR Global

Aimee brings over 20 years of experience in leadership positions in large humanitarian and development organizations. She has worked in many humanitarian crises, from the Tajik civil war to the earthquake in Haiti, and from the conflicts in the Balkans to the Syrian refugee crisis and the conflict in South Sudan. Prior to joining Translators without Borders in 2016, Aimee worked with Care, Oxfam, Save the Children and the United Nations.
