28 Feb

Governmental Focus on AI May Bring Scrutiny to the Language Sector

In the summer of 2022, the industry saw the first major regulation of the use of machine translation in the United States, when the US Code (see page 47,861) was amended to forbid the use of unrevised MT in certain medical environments where federal funds are used. Although this regulation is narrow in scope, you would do well to bet that it won’t be the last such rule.

The explosion of interest in generative AI technology like ChatGPT has led to general calls for the regulation of artificial intelligence. Such efforts will also affect the language industry. To date, neural machine translation has been the focal point of AI for language professionals and the general public. With its proliferation – often driven by hyperbolic press releases that skip any serious examination of risk – it is just a matter of time before someone makes a good career as a lawyer specializing in translation-related cases. Although today it is unclear who would be responsible for a critical mistranslation stemming from MT on a company’s website, CSA Research predicts that within a year some unlucky company will become the textbook case that decides this matter.

Enter the AI Act, a proposed regulation in the European Union. In the works since 2021, following a European Council call to action in 2019, it is an ambitious effort to control the proliferation of AI-driven applications based on an assessment of the risks they pose.

Its specific objectives are to:

  • Ensure that AI systems placed and used on the Union market are safe and respect existing law on fundamental rights and Union values.
  • Ensure legal certainty to facilitate investment and innovation in AI.
  • Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems.
  • Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

Even though the scope of the regulation is limited to the EU, the Act will effectively govern the use of AI on a broader scale. Why? International enterprises tend to adapt their practices to the strictest regulation they face, so even US-based firms will tend to follow the AI Act once it enters into force. The European Commission estimates compliance costs of €3,000–7,000 per year for implementers of “high-risk” AI, but much less for lower-risk scenarios. We therefore expect that most implementers would face relatively minimal costs.

Significantly, the Act relies on self-certification and self-policing, with no dedicated enforcement or verification apparatus. This suggests that the EU intends to take a hands-off approach, invoking the Act only when problems arise. As a result, some critics argue that it does not go far enough and will fail to protect the rights of European citizens, and there is no guarantee that it will not acquire more legal force over time.

Why Does the AI Act Matter for the Language Sector?

Is there cause for concern in the language sector? At first glance, it would seem not, but a closer examination of the Act shows that some applications of MT and, more broadly, of AI in the language industry could run into challenges in three areas that are subject to heightened scrutiny:

  1. Employment, worker management, and access to self-employment. As AI features in translation management systems (or TMSes) increasingly determine which linguists will get jobs and control remuneration, these technologies may be considered high risk. For example, if an LSP deploys a quality estimation tool and pegs payment models to it, that tool would affect worker employment and could be considered a high-risk application.
  2. Law enforcement, migration, asylum, and border management. To the extent that these areas interact with language, machine translation and machine interpreting will play a role in their ability to deliver desired results. Language technologies would clearly fall under the high-risk category if their output could influence legal cases.
  3. Access to and enjoyment of essential services and benefits. If you or a language services partner use MT or other AI-driven features to provide access to content or services that could be considered essential, and the tools fail to meet some standard of performance, would you be liable for the shortfall? Today, this area is almost completely unregulated, but the scrutiny of the AI Act could mean that you would be required to take steps to ensure that AI does not hinder access to essential services and benefits.

Along the same lines, government users may face additional pressure to restrict use of MT in order to ensure that content meets strict requirements, even if that means not providing translated content at all. In other words, the best route to compliance may be to avoid translation in the first place. Ironically, this might mean that AI would be held to a higher standard than typical human translation processes, thereby discouraging organizations from using it to add language access cost-effectively.

Prepare for an Uncertain Future

Although it is not obvious that machine translation, AI-driven TMSes, and other language sector technologies will be considered high risk, it is equally unclear that they won’t be. All it takes is one instance of harm arising from these systems for their apparent risk category to change. And in the US, with its significantly more litigious approach to such matters, something may seem fine until a court decision makes it apparent that it is not and sparks a flurry of copycat lawsuits.

In short, the language sector has so far largely escaped the attention of regulators and lawmakers, something that has enabled it to flourish, but which also puts it at risk from future regulation. As calls to regulate general AI increase, the sector is likely to feel pressure it is not used to. All developers and adopters of these technologies must understand the changing legal and regulatory landscape and develop plans for how they will respond to new regulations. No longer can companies assume that obscurity will protect them.

About the Author

Arle Lommel

Senior Analyst

Focuses on language technology, artificial intelligence, translation quality, and overall economic factors impacting globalization
