Some people feel that using artificial intelligence (AI) to interpret human speech is a curse because individuals using the service could suffer harm due to mistranslations or because interpreters might lose their livelihood. Others embrace AI for all the possibilities it creates, first and foremost the ability to offer language access on a much greater scale and at a reduced cost.
So which camp is right? As with so many techno-ethical questions, there is no single right answer. It really depends on the scenario.
Imagine you work at a power company and want to notify consumers of a power outage with a voice message. Time matters – in the best case, the outage will be resolved before you even receive recorded messages back from human interpreters. In the worst case, the failure to notify affected people results in a serious injury or worse. In this situation, the need for speed far outweighs the need for perfection. If the message adequately conveys the fact that the power is out and it's made available immediately, consumers will know there's a problem and that your company is working on it.
Consider a different scenario. You are a medical doctor who must deliver bad news to a patient with limited English proficiency. In this case, it could make sense to wait a few minutes, hours, or even days until you can get a professional interpreter to convey the diagnosis with all its nuance and detail, in a matching tone of voice – and who can work with you, the physician, to answer all of the patient's questions. AI, on the other hand, would not be a good proxy for a professional interpreter in this instance. Its limitations in accuracy and tonality would make it a poor candidate for the job. It would likely leave patients feeling they didn't get all the information they need, and it would probably make for a less satisfying dialogue.
These two examples sit at opposite ends of a very broad spectrum of scenarios that present varying degrees of risk, urgency, domain-specific terminology and jargon, and many other concerns for the participants.
The big question is when AI-based automated interpreting is safe and appropriate, and when humans are your best solution.
In May 2023, 10 influential figures in the world of interpreting identified the need for an organization to advocate for fair and ethical AI in interpreting. Their "fair and ethical" focus addresses the broad range of concerns we raised earlier: when does automated interpreting make sense, and when does it not? This original "spark" group recruited volunteers to organize and lead the effort. By August 2023, the Interpreting SAFE-AI Task Force was formed to ensure the responsible use of AI in interpreting.
This new Task Force commissioned CSA Research to develop, run, and analyze a large-scale perception study of users, buyers, and providers of interpreting services and technology. The study's goal is to capture current perceptions of AI for interpreting well enough to eventually assist the Task Force in writing guidelines on recommended usage. We need your help recruiting participants for the study.
As of today, 814 interpreters have already participated in the survey. That's a large enough sample to offer a teaser of what is to come.
Do these numbers mean that AI interpreting is not yet ready for prime time? It's really a question of use case and of who is responding. We expect perspectives to vary greatly by group: technology vendors are likely to be more optimistic, while end-users could fall somewhere between tech vendors and interpreters in their expectations. Stand by – the survey responses will tell. And if you're in any of the consumer or provider groups, we invite you to share your perceptions in the survey.
The CSA Research report will be made available to the public thanks to the efforts of the Interpreting SAFE-AI Task Force and donors who funded the research.
It's vital to capture different perspectives from a variety of users and decision-makers. This survey is not just about the interpreter perspective, although interpreters are, of course, the first affected by technology that has the potential to complement or replace them. At the end of the day, what we are looking for is what's best for the consumers who need interpreting to do their jobs or to interact in situations where they don't speak the language. And the answer will in no way be all for or all against AI – it will be about using it where the benefits outweigh the risks.
Director of LSP Service
Focuses on LSP business management, strategic planning, sales and marketing strategy and execution, project and vendor management, quality process development, and interpreting technologies