Automate Less, Minimize More
If your organization creates or implements software – and who doesn't these days, in some form or another – you have to figure out how to test it to ensure that it meets international user needs. Our February and March 2020 survey and interviews with 21 global enterprises that depend on automated testing demonstrate that software testing only finds what's wrong after the fact and within the constraints of the original design. More staff performing more automated tests will not discover that the original product or service concept doesn't apply in a particular market, nor that the home-market experience is missing or broken. Whether testing is automated or manual, companies should strive to make it as intelligent as possible by exploring how to solve problems before they appear.
What Drives the Move to Automated Testing?
As the content types and volumes required by local markets continue to mushroom, we asked interviewees which issues originally pushed them to investigate or implement automated testing. In addition to common themes such as cost reduction and scalability, they highlighted the impact of continuous development models and ongoing mergers and acquisitions as significant drivers.
- Impact of continuous development. Many localization teams still scramble to deliver faster turnarounds to meet product release schedules. Whether development teams run under a flavor of Agile, continuous development, or DevOps, the result is the same for localizers: a landscape with an exploding number of releases and dev cycles of two to four weeks (or less). In many cases, QA teams are much smaller now than in the past because developers are also responsible for building and executing their own test cases for the source language. Such dispersed testing models force localization staff to run around collecting test suites, rather than working with just one or a few people on a dedicated testing team as they did in the past.
- Mergers and acquisitions. Firms that continue to expand through merging with or acquiring other firms, teams, intellectual property, or expertise in the form of software code are almost always pushed to automate more of what they do. Automation allows them to keep pace with the integration required to realize the benefits from this growth model. Manual testing simply cannot scale in terms of velocity or cost under M&A scenarios.
Analyze Your Company’s Real Goals for Automated Testing
How will it improve the customer experience for local markets? Could resources be better applied in other ways to achieve some of these same goals? Carefully analyze processes around testing to ensure that you’re not 1) trying to make up for the lack of context for translators and reviewers; or 2) dealing with internationalization issues that should be fixed upstream before they enter localization workflows.
- How will (expanding) automated testing really benefit customers? Put yourself in the shoes of your prospects and customers in local markets. Review their profiles, expectations, and journeys with your brand. How exactly will implementing (more) automated testing benefit them? Or, will it? Will the additional investment ensure higher quality, faster turnaround times, lower costs, and/or meeting critical key performance indicators (KPIs)? Are customers clamoring for higher quality or faster turnaround times? Are competitors starting to pull ahead because they’re already excelling in these areas? The answers to these questions will help elucidate the true value of (expanded) automated testing, and thus lead to more informed decisions.
- Would time, money, and people be better spent on something else? Brainstorm around how to meet KPIs if automated testing is not an option. Collaborating with designers, creators, and coders to reduce the required amount and level of testing may reduce the need for a high level of automated testing over time. For example, could you test less if designers minimized textual elements?
- Is testing covering up problems that should be fixed upstream? You may also achieve quality goals faster by improving the quality of the original source code or by implementing an ongoing internationalization and localization training program for current and newly hired developers and testing staff. Interviewees report that the quality of the writing in original software strings directly affects their ability to keep up with Agile teams. You might even consider a program that compensates your user community for executing a higher level of manual testing.
- Are you implementing automated testing for the wrong reasons? Carefully analyze processes around testing. Interviewees strongly advise against executing extra automated testing when the real problem is a lack of context for linguists or internationalization issues that are not addressed during design and development.
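The upstream internationalization defects interviewees describe can be concrete and mundane. A minimal sketch (the function names, strings, and locales below are hypothetical, chosen only for illustration): concatenating sentence fragments bakes English word order into the code, so no volume of downstream testing, automated or manual, can yield a correct translation. The fix belongs in design and development, not in the localization workflow.

```python
# Broken: English word order is hardcoded by concatenation, so
# translators never see a complete sentence they can restructure.
def files_found_bad(n, folder):
    return str(n) + " files were found in " + folder + "."

# Better: one complete, parameterized message per locale. Translators
# work with the full sentence and can reorder placeholders as their
# grammar requires (note the different placeholder order in German).
MESSAGES = {
    "en": "{n} files were found in {folder}.",
    "de": "In {folder} wurden {n} Dateien gefunden.",
}

def files_found(n, folder, locale="en"):
    return MESSAGES[locale].format(n=n, folder=folder)

print(files_found(3, "Downloads", "de"))
# In Downloads wurden 3 Dateien gefunden.
```

In production code this pattern is usually handled by an i18n library (gettext, ICU MessageFormat, and the like) rather than a hand-rolled dictionary, but the principle is the same: externalized, whole-sentence messages remove an entire class of defects before any test runs.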
Consider the larger question of why you’re testing as much as you are, rather than fixating on what to automate next. Review all sources of input – text, multimedia, code, marketing programs – and how you’re currently testing them. Identify how each one can be reduced, improved, and adapted for a better fit for target audiences, regardless of language or geographic location. COVID-19 provides the perfect opportunity to run these audits as all organizations seriously rethink what is needed to acquire and keep their customer bases. Only after considering and eliminating other ways to ensure quality should you consider (more) testing automation.
About the Author