
Our Analysts' Insights

12 Apr

When Language Technology Promises Vacation Romances - But You Better Have a Backup Plan

Imagine yourself in a café in Paris or on a beach in Cancún, running into some gorgeous human specimen you just can’t help but approach. You walk up to the person, offer them a hearing device, and point for them to put it in their ear while you pop one in yourself. Then you launch an app on your smartphone and start communicating with the help of machine interpreting, hoping the app will accurately translate your best pick-up line.


Source: Mymanu (Clik) 

It sounds far-fetched and not likely to result in many “and they lived happily ever after” stories. However, several companies are betting that such devices – earbuds paired with your smartphone and specialized apps for speech recognition and machine interpreting – will catch on, and they are attracting quite a few investors to their crowdfunding campaigns.

CSA Research talked to three companies in this budding market for “hearables” to assess the state of the offering. Aside from giving us a good laugh with some of their marketing campaigns, what are they seeking to achieve? As it turns out, the romance-oriented campaigns make sense. These companies focus mostly on early adopters of technology innovation – young to middle-aged males who live in large urban areas and like tech gadgets. Dreams of reaching other demographics and target markets are not uncommon for these tech vendors, but they tend to realize the systems aren’t there yet. In short, earbuds powered by machine interpreting are – for now at least – fun gadgets that pose no threat to the language services industry or to “it’s just lunch” dating services. In our briefings we learned that:

  • The idea is great. Being able to communicate in real-time with somebody whose language you don’t speak is a concept with tremendous potential. This is what motivates investments. After all, buying earbuds would have an easy return on investment if you could eliminate the cost of human interpreters. But we’re not there yet, despite great advances.  
     
  • The results don’t impress. After all, three pretty complex technologies are in play – voice recognition, machine translation, and voice synthesis. All three can add noise to the process. While we were able to get the overall gist in the systems we were able to test, suitable use cases tend more toward fun, personal situations rather than anything business-oriented. 
     
  • The logistics need work. We identified unresolved issues when we interviewed various providers. First, these devices have limited usability unless both parties have one. That’s not likely to happen when devices cost a few hundred dollars each. Plus there’s no clear leader in this very nascent market, more options keep joining the fray, and as in any other new sector, there’s no agreement on specifications that will let devices from different manufacturers communicate with each other. The solution we heard from vendors is that they would ship devices as pairs so individuals could share them with others. But this solution leads to a second, bigger problem: Many people won’t want to insert a device in their ear that may have been in somebody else’s. Aural hygiene concerns are likely to limit the success of current earbuds, and solutions that rely on everyone owning an expensive earbud of their own are fundamentally unrealistic. 
     
  • The immediate future for these suppliers looks grim. The first wave of products coming to market will enable people to have some fun with interpreting. The second wave of devices might prove more interesting, but by then, we expect that companies like Apple and Samsung will have integrated such technology into their already widely used products, making the market for specialized devices less likely to succeed. However, the availability of APIs for the leading speech platforms such as Alexa, Cortana, and Siri could lead to a third wave of more viable, more accurate earbuds built on these increasingly powerful commercial solutions. 
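The compounding effect of chaining three imperfect technologies can be illustrated with a quick back-of-the-envelope calculation. This is a minimal sketch; the per-stage accuracy figures are purely illustrative assumptions, not measurements from any vendor we spoke with:

```python
# Back-of-the-envelope model of a speech-to-speech interpreting pipeline.
# The per-stage accuracies below are illustrative assumptions only.
STAGE_ACCURACY = {
    "voice_recognition": 0.90,    # speech-to-text
    "machine_translation": 0.90,  # text-to-text
    "voice_synthesis": 0.95,      # text-to-speech
}

def end_to_end_accuracy(stages: dict) -> float:
    """Chained stages compound: each stage degrades the previous one's output."""
    result = 1.0
    for accuracy in stages.values():
        result *= accuracy
    return result

print(f"{end_to_end_accuracy(STAGE_ACCURACY):.2%}")  # 76.95% under these assumptions
```

Even three individually decent stages leave roughly a quarter of the content degraded, which is consistent with why our tests yielded “the overall gist” rather than business-grade output.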

Problems aside, there is some very interesting technology at play here. The power of enhanced earbuds is that they pick up sound at the tympanic membrane, which reduces the noise fed into speech recognition. As a result, hearables outperform pocket translators and other wearable types such as watches and necklaces, which suffer from voice distortion and interference from background and ambient sounds. 



Source: Waverly Labs (Pilot) 
 

Market Drivers 
It’s this audio clarity, more than the language function, that drives the market for earbuds. All but one of the vendors we interviewed promote the multi-function aspect of their product, putting forward noise cancelling, music listening, or phone answering as the primary feature of the gadgets. As those functions creep into products from Bang & Olufsen, Beats, and Bose, they entice more people to buy high-end earbuds. This general-market acceptance could create a platform for interpreting via hearing devices by enabling better, clearer, less noisy communication with remote human interpreters, whether through over-the-phone interpreting (OPI) or remote simultaneous interpreting (RSI). 

As tech developers look for the next big thing that comes after smartphones, wearables are high on the list, especially when you combine powerful earbuds with mixed or augmented reality glasses. Even if most of the current crop of enhanced earbud developers fail to deliver on their grand ambitions, their experience is driving a field that did not even exist two years ago. 

In the end, are makers of MI-powered earbuds over-selling and under-delivering? Not necessarily. The language industry tends to set a higher bar for these gadgets than their own makers do: Developers may produce slick ads that promise peace, love, and friendship – or at least a whirlwind fling – but they also understand that people buy them for the cool factor, so they target their marketing efforts accordingly. Interpreting-capable earbuds are on their way to becoming a mainstream tool, although whether the current producers will manage to capitalize on this shift remains to be seen. Mass usage is likely to take a while to gain speed and will require developers to address the significant barriers we found.

About the Author

Hélène Pielmeier

Director of LSP Service

Focuses on LSP business management, strategic planning, sales and marketing strategy and execution, project and vendor management, quality process development, and interpreting technologies
