Localization Demands Morph to Support Mobility, Speech, and Intelligence
While often complex and costly, localization is a well-established practice at many companies. CSA Research's interviews and surveys with both Global 3000 companies and language service providers show that the best of these organizations have tamed the rhythm of localization – processes and schedules are understood and under control. Many plan to throttle back their localization budgets as they work to optimize their current processes and tools for the 10 or 20 languages they support before adding any more. But new devices enabling greater mobility and ubiquity are raising expectations for experiences localized into many more languages, for many more devices.
What's driving this change? An assortment of mobile, handheld, wearable, hygienic, prosthetic, and cosmetic devices have begun to create growth markets. Last year the ITU reported that mobile-broadband networks are within reach of 84% of people on the planet. It predicted that the total number of mobile-broadband subscriptions would hit 3.6 billion by the end of 2016. The Internet of Things promises to connect even more of the objects in our universe, both to humans and to one another. These new devices bear witness to – and participate in – commercial transactions, life experiences, and every manner of personal and business activity.
CSA Research finds that three attributes of these increasingly ubiquitous mobile, wearable, IoT, and other devices require a fundamental change in how we design interfaces and experiences:
- Proximity. Mobile phones and wearables are constant companions, always on and at hand, on wrist, on head, or in ear. Studies of cellphone usage provide a sense of what this on-the-body experience means – hundreds, if not thousands, of touches, glances, or swipes per day. While such interactions are addictive for some, the upside is that these devices go beyond the PC's promise of information at your fingertips. Instead, they stream selected or unfiltered information to handhelds, wearables, sensors, and embedded devices.
- Immediacy. The urgent chirps and vibrations that alert us to events have a much greater urgency when they originate in our pockets or in our ears rather than from a PC. Brain research has shown that people react to text message pings with releases of dopamine, a chemical responsible for feelings of pleasure and euphoria. "People find themselves so drawn to their devices that elements of addiction are at play," said Dr. David Greenfield, a clinical professor of psychiatry at the Center for Internet and Technology Addiction. "If that desire for a dopamine fix leads us to check our phones while we're driving, a simple text can turn deadly." While that research relates to driving while texting, we all react to that same insistent ping from our devices in boardrooms, bars, and boudoirs.
- Intimacy. Wearables and mobiles find their way into our most personal moments as we rely on them to provide ubiquitous connectivity and computing. One usage study discovered that 87% of participants brought their phones out of sleep mode between midnight and 5am at least once during the sample period. Throughout the day and night, we call upon an assortment of apps, bots, and services to manage our interpersonal, medical, travel, and recreational activity.
These three characteristics of close-at-hand devices require more contextual awareness, which is most effectively communicated through a nuanced interface in the user's language, adapted to local markets. They also validate the long-held CSA Research contention that consumers prefer native "comfort-language content" when sitting around in their pajamas. They even erode the tolerance for foreign-language content that businesspeople profess when engaging with websites not localized for their country. Successful companies will elevate the business requirement to localize the customer experience accurately and pervasively.
The demands of these constant electronic companions don't end with written native-language support. Spoken language will ascend to a role as the most natural interface between humans and computers. While we tap on our watch to respond, we would be much more comfortable talking to it. The microphone and speakers emerge as the most natural human-computer interface (HCI) – and most people will want to converse in their everyday language using Alexa, Cortana, Siri, or their phone's version of a virtual assistant. Of course, the spoken word isn't appropriate everywhere, so other HCI options include haptic and taptic interfaces, finger-writing on dashboard screens in luxury autos, and a gesture-driven model reminiscent of the one used by Tom Cruise in "Minority Report."
How should companies develop these next-generation apps and interfaces? Our brief on mobile and speech requirements outlines three steps:
1. Adapt known localization best practices to the newest platforms – there is a lot that carries over, but much that needs to be learned.
2. Enable spoken interactions in the user's choice of language.
3. Transform phones into the AI-driven virtual assistants that everyone craves.
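The second step – honoring the user's choice of language – starts with locale negotiation: matching a ranked list of user preferences against the speech locales a device actually supports. A minimal sketch in Python, assuming a hypothetical list of supported locales and a BCP 47-style fallback from a full tag (e.g. "pt-BR") to its base language; the function names and the default locale are illustrative, not part of any particular platform's API:

```python
def pick_speech_locale(user_prefs, supported, default="en"):
    """Return the best supported locale for a user's ranked preferences.

    Tries each preference in order: an exact match first, then the bare
    base language (e.g. "pt" for "pt-PT"), then any supported regional
    variant of that base language. Falls back to a default if nothing
    matches.
    """
    supported_set = {s.lower() for s in supported}
    # Map base language -> first supported regional variant,
    # e.g. "pt" -> "pt-br" when only "pt-BR" is supported.
    base_map = {}
    for s in supported:
        base_map.setdefault(s.split("-")[0].lower(), s.lower())

    for pref in user_prefs:
        tag = pref.lower()
        if tag in supported_set:          # exact match, e.g. "ja"
            return tag
        base = tag.split("-")[0]
        if base in supported_set:         # base-language match
            return base
        if base in base_map:              # regional-variant fallback
            return base_map[base]
    return default


# Example: a device that supports a handful of speech locales.
supported = ["en", "en-GB", "es-MX", "pt-BR", "ja"]
print(pick_speech_locale(["pt-PT", "es"], supported))  # "pt-br"
print(pick_speech_locale(["de", "ja"], supported))     # "ja"
```

The same fallback chain applies whether the output is a synthesized voice or on-screen text; the point is that language choice is resolved once, up front, rather than left to each app.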
Localization teams, until recently focused on optimizing current workflows, will feel a new urgency as Global 3000 companies and their LSP partners rush to support this blossoming assortment of new devices and human-computer interfaces. Smart managers and practitioners will recognize the new opportunities for growing their visibility and importance to their companies' next generation of product, service, and application development and localization.