Localization Demands Morph to Support Mobility, Speech, and Intelligence

While often complex and costly, localization is a well-established practice at many companies. CSA Research's interviews and surveys with both Global 3000 companies and language service providers show that the best of these organizations have tamed the rhythm of localization – processes and schedules are understood and under control. Many plan to throttle back their localization budgets as they work to optimize their current processes and tools for the 10 or 20 languages they support before adding any more. But new devices enabling greater mobility and ubiquity are raising expectations for experiences localized into many more languages, on many more devices.

What's driving this change? An assortment of mobile, handheld, wearable, hygienic, prosthetic, and cosmetic devices has begun to create growth markets. Last year the ITU reported that mobile-broadband networks are within reach of 84% of people on the planet. It predicted that the total number of mobile-broadband subscriptions would hit 3.6 billion by the end of 2016. The Internet of Things promises to connect even more of the objects in our universe, both to humans and to one another. These new devices bear witness to – and participate in – commercial transactions, life experiences, and every manner of personal and business activity.

CSA Research finds that three attributes of these increasingly ubiquitous mobile, wearable, IoT, and other devices require a fundamental change in how we design interfaces and experiences: 

  1. Proximity. Mobile phones and wearables are constant companions, always on and at hand, on wrist, on head, or in ear. Studies of cellphone usage provide a sense of what this on-the-body experience means – hundreds, if not thousands, of touches, glances, or swipes per day. While such interactions are addictive for some, the upside is that these devices go beyond the PC's promise of information at your fingertips. Instead, they stream selected or unfiltered information to handhelds, wearables, sensors, and embedded devices.
     
  2. Immediacy. The chirps and vibrations that alert us to events carry much greater urgency when they originate in our pockets or in our ears rather than from a PC. Brain research has shown that people react to text message pings with releases of dopamine, a chemical responsible for feelings of pleasure and euphoria. "People find themselves so drawn to their devices that elements of addiction are at play," said Dr. David Greenfield, a clinical professor of psychiatry at the Center for Internet and Technology Addiction. "If that desire for a dopamine fix leads us to check our phones while we're driving, a simple text can turn deadly." While that research relates to driving while texting, we all react to that same insistent ping from our devices in boardrooms, bars, and boudoirs.
     
  3. Intimacy. Wearables and mobiles find their way into our most personal moments as we rely on them to provide ubiquitous connectivity and computing. One usage study discovered that 87% of participants brought their phones out of sleep mode between midnight and 5am at least once during the sample period. Throughout the day and night, we call upon an assortment of apps, bots, and services to manage our interpersonal, medical, travel, and recreational activity.

These three characteristics of close-at-hand devices require more contextual awareness, which can most effectively be communicated through a nuanced interface in the user's language and adapted to local markets. They also validate the long-held CSA Research contention that consumers prefer native "comfort-language content" when sitting around in their pajamas. They ultimately even chip away at the professed tolerance for foreign-language content that businesspeople have when engaging with websites not localized for their country. Successful companies will elevate the business requirement to accurately and pervasively localize the customer experience. 

The demands of these constant electronic companions don't end with written native-language support. Spoken language will ascend to a role as the most natural interface between humans and computers. While we tap on our watch to respond, we would be much more comfortable talking to it. The microphone and speakers emerge as the most natural human-computer interface (HCI) – and most people will want to converse in their everyday language using Alexa, Cortana, Siri, or their phone's version of a virtual assistant. Of course, the spoken word isn't appropriate everywhere, so other HCI options include haptic and taptic interfaces, finger-writing on dashboard screens in luxury autos, and a gesture-driven model reminiscent of the one used by Tom Cruise in "Minority Report."

How should companies develop these next-generation apps and interfaces? Our brief on mobile and speech requirements outlines three steps: 1) adapt known localization best practices to the newest platforms – there is a lot that carries over, but much that needs to be learned; 2) enable spoken interactions in the user's choice of language; and 3) transform phones into the AI-driven virtual assistants that everyone craves.
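To make the second step concrete, here is a minimal sketch of how a web app might negotiate the user's preferred spoken language before starting a voice interaction. It assumes a browser that exposes the Web Speech API; the SUPPORTED_LOCALES list and the pickLocale() helper are illustrative placeholders, not a prescribed implementation.

```typescript
// Minimal sketch: run a voice interaction in the user's preferred language.
// Assumes a browser with Web Speech API support (SpeechRecognition / speechSynthesis).
// SUPPORTED_LOCALES and pickLocale() are illustrative, not part of any standard.

const SUPPORTED_LOCALES = ["en-US", "de-DE", "ja-JP", "pt-BR"]; // locales the app is localized into

// Pick the best match between the user's browser preferences and the supported list.
function pickLocale(preferred: readonly string[], supported: string[]): string {
  for (const pref of preferred) {
    const exact = supported.find((s) => s.toLowerCase() === pref.toLowerCase());
    if (exact) return exact;
    const sameLanguage = supported.find((s) => s.split("-")[0] === pref.split("-")[0]);
    if (sameLanguage) return sameLanguage;
  }
  return supported[0]; // fall back to the app's default locale
}

function startVoiceInteraction(): void {
  const locale = pickLocale(navigator.languages, SUPPORTED_LOCALES);

  // Listen in the user's language, not the developer's.
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.lang = locale;
  recognition.onresult = (event: any) => {
    const transcript = event.results[0][0].transcript;
    console.log(`Heard (${locale}): ${transcript}`);
  };
  recognition.start();

  // Respond in the same locale via speech synthesis.
  const reply = new SpeechSynthesisUtterance("…"); // localized response text goes here
  reply.lang = locale;
  window.speechSynthesis.speak(reply);
}
```

The same negotiation applies on native platforms, where the device's locale settings and the speech engine's available languages play the roles that navigator.languages and SUPPORTED_LOCALES play in this browser sketch.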

Localization teams, until recently focused on optimizing current workflows, will feel a new urgency as Global 3000 companies and their LSP partners rush to support this blossoming assortment of new devices and human-computer interfaces. Smart managers and practitioners will recognize the new opportunities for growing their visibility and importance to their companies' next generation of product, service, and application development and localization. 

About the Author

Donald A. DePalma

Chief Research Officer

Focuses on market trends, business models, and business strategy
