
Simon Yoxon-Grant

President & CEO, LanguageLine Solutions

Having spent two decades with LanguageLine before stepping into a five-year role at our sister company, I returned in 2024 to find an industry where cycles of innovation, once spanning years, now turn with breathtaking speed. This shift is driven by technological advances that challenge every corner of our field.

We are at an inflection point. The decisions we make this year will echo for generations, shaping the trajectory of our industry and society. It is a sobering responsibility—one we must navigate with an unwavering ethical commitment.

Balancing Innovation with Purpose
Technology, particularly AI and machine learning, has revolutionized how we work. These tools allow us to deliver services more efficiently, cost-effectively, and at unprecedented scale. Yet, we must remain vigilant. The pursuit of progress should never come at the expense of accuracy, cultural sensitivity, or fairness. These are the pillars of trust on which our industry is built.

The challenge before us is not whether we can move fast (technology ensures that we will), but whether we can do so thoughtfully. Will we honor the humanity at the heart of our work? Will we remain an ethical industry?

The Slippery Slope of Compromise
Temptations will arise along the way. The lure of short-term gains has the potential to push us to compromise, inch by inch: a concession on privacy here, a slight erosion of quality there. But those inches add up. If we sacrifice too much of our value system, the reflections we write in 2025 will reveal an industry that is far less trusted and much more vulnerable.

Our industry thrives today not because we have pursued every opportunity at any cost, but because we have held firm to what matters: serving people with integrity, fostering understanding, and enabling connections in essential moments of need.

The soul of our industry must not be for sale: not now, not ever. By remaining true to our principles, we can chart a path forward that is both innovative and ethical, ensuring our impact endures for decades to come.

The world will be watching.

In recent years, we’ve seen a wave of new competitors in our industry that promise more for less. Some rely on linguists who are paid below sustainable rates. Others use AI trained on generic data with no oversight from professionals. These approaches are marketed as efficient. In practice, they introduce real risk.

Poorly trained or overextended linguists increase the chance of miscommunication. AI systems that lack cultural context or domain knowledge introduce errors that no software update can fix.

There is no margin for error in what we do, yet I’m concerned that the price pressure from these new business models threatens to pull our entire industry into a race to the bottom.

"Is your interpretation responsibly sourced?" I think we can all agree that this question should matter as much in our industry as it does for food, clothing, and building materials.

As language service providers, we have an opportunity to help clients understand the production process behind their language access, the people involved, and whether it meets their required standards.

Responsible sourcing means working with qualified professionals, investing in their training and well-being, and ensuring they have the tools to deliver consistent quality. It means building AI that is transparent, linguistically rigorous, and guided by human expertise at every step.

This moment is a test of what our industry values. It’s a test of our “why.”

We should all be thrilled about the potential for AI to extend our mission. It can take us further, faster toward a world in which language and cultural barriers no longer exist.

This should be – and can be – a race to the top. AI represents a tremendous opportunity if we harness it to elevate our industry’s standards rather than compromise them.



The linguist of the future will not be replaced by AI. With judgment at a premium, they will instead be elevated to a higher-trust role.

The question isn't whether AI will transform language access. It already has. It's whether we'll let it hollow out what matters most or use it to amplify our humanity.

This last year taught me something simple: the better machine language gets at precision and scale, the more obvious it becomes where human judgment and cultural intelligence are irreplaceable. Leaders, regulators, and technologists need to make a choice, and we need to make it now.

AI's strength is concrete: pattern recognition, repeatability, relentless throughput. We can automate mechanical work and reduce latency for routine encounters. But we can't forget what language access has always been about. It's not just words. It's people. We listen when someone doesn't yet know how to say what they need. We allow for emotion. We translate context as much as content.

Imagine a mother describing her sick or injured child's symptoms. Fear overwhelms her and she switches mid-sentence to her regional dialect. A human interpreter hears a parent breaking down, recognizes the shift, and ensures the doctor understands what's being said. This is what we preserve when we design our systems right: AI takes on what machines do well and frees humans to do what we do best.

This means building hybrid flows as our standard, not our fallback. We let models handle predictable, low-risk work. We require human review where stakes are high. We build automatic escalation paths as a first principle and measure what matters: time-to-escalation, error rates, and cognitive load on our interpreters.

And we stop pretending technical metrics tell the whole story. Whether a patient felt heard, whether a cultural cue was missed: these aren't soft concerns. They're the point. We need to measure perceived empathy, comprehension, and user comfort with the same rigor we bring to accuracy and latency. We also need governance we can defend, transparency about training data, documented failure modes, and auditable logs. We should co-design with clinical and community partners to identify hidden risks and define where human judgment must prevail.

Finally, the future doesn't happen by accident. It requires deliberate investment in our people. AI creates new roles: interpreter-auditors, escalation analysts, hybrid engineers. We can build career pathways that recognize these skills as core to our profession.

When AI handles the mechanical labor, our work returns to what it has always been about: not finding the perfect word, but helping imperfect people find understanding in moments that matter most.