
Our Analysts' Insights

28 Feb

Governmental Focus on AI May Bring Scrutiny to the Language Sector

In the summer of 2022, the industry saw the first major regulation of the use of machine translation in the United States, when the US Code (see page 47,861) was amended to forbid the use of MT without revision in certain medical environments where Federal funds are used. Although this regulation was narrow in scope, it is a safe bet that it won’t be the last such rule.

The explosion of interest in generative AI technology like ChatGPT has led to widespread calls for the regulation of artificial intelligence. Such efforts will also affect the language industry. To date, neural machine translation has been the focal point of AI for language professionals and the general public. With its proliferation – often driven by hyperbolic press releases with little serious examination of risk – it is only a matter of time before someone builds a successful career as a lawyer specializing in translation-related cases. Although today it is unclear who would be responsible for a critical mistranslation stemming from MT on a company’s website, CSA Research predicts that within a year some unlucky company will become the textbook case that decides this matter.

Enter the AI Act, a proposed regulation in the European Union. In the works since 2021, following a European Council call to action in 2019, it is an ambitious effort to control the proliferation of AI-driven applications based on an assessment of their risks.

Its specific objectives are to:

  • Ensure that AI systems placed and used on the Union market are safe and respect existing law on fundamental rights and Union values.
  • Ensure legal certainty to facilitate investment and innovation in AI.
  • Enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems.
  • Facilitate the development of a single market for lawful, safe, and trustworthy AI applications and prevent market fragmentation.

Even though its scope is limited to the EU, the Act will effectively regulate the use of AI on a broader scale. Why? International enterprises tend to adapt their practices to the strictest regulation they face, so even US-based firms will tend to follow the AI Act once it enters into force. The European Commission estimates compliance costs of €3,000–7,000 per year for implementers of “high risk” AI, but much less for lower-risk scenarios. We therefore expect that most implementers would face relatively minimal costs.

Significantly, the Act relies on self-certification and self-policing, with no dedicated enforcement or verification apparatus. This suggests that the EU intends to take a hands-off approach, invoking the Act only when problems arise. As a result, some critics argue that it does not go far enough and will fail to protect the rights of European citizens, and there is no guarantee that it will not acquire more legal force over time.

Why Does the AI Act Matter for the Language Sector?

Is there cause for concern in the language sector? At first glance, it would seem not, but a closer examination of the Act shows that some applications of MT – and, more broadly, of AI in the language industry – could lead to challenges when these technologies are applied in three areas that are subject to heightened scrutiny:

  1. Employment, worker management, and access to self-employment. As AI features in translation management systems (or TMSes) increasingly determine which linguists will get jobs and control remuneration, these technologies may be considered high risk. For example, if an LSP deploys a quality estimation tool and pegs payment models to it, that tool would affect worker employment and could be considered a high-risk application.
  2. Law enforcement, migration, asylum, and border management. To the extent that these areas interact with language, machine translation and machine interpreting will play a role in their ability to deliver desired results. Language technologies would clearly fall under the high-risk category if their output could influence legal cases.
  3. Access to and enjoyment of essential services and benefits. If you or a language services partner use MT or other AI-driven features to provide access to content or services that could be considered essential, and the tools fail to meet some standard of performance, would you be liable for the shortfall? Today this area is almost completely unregulated, but under the scrutiny of the AI Act you could be required to take steps to ensure that AI does not hinder access to essential services and benefits.

Along the same lines, government users may face additional pressure to restrict the use of MT in order to ensure that content meets strict requirements, even if that means not providing translated content at all. In other words, the best route to compliance may be to avoid translation in the first place. Ironically, this might mean that AI is held to a higher standard than typical human translation processes, which would discourage organizations from using it to add language access cost-effectively.

Prepare for an Uncertain Future

Although it is not obvious that machine translation, AI-driven TMSes, and other language sector technologies will be considered high risk, it is equally unclear that they won’t be. All it takes is one instance of harm arising from these systems for their apparent risk category to change. And in the US, with its significantly more litigious approach to such matters, something may seem fine until a court decision makes it apparent that it is not and sparks a flurry of copycat lawsuits.

In short, the language sector has so far largely escaped the attention of regulators and lawmakers, something that has enabled it to flourish but that also puts it at risk from future regulation. As calls to regulate AI in general increase, the sector is likely to feel pressure it is not used to. All developers and adopters of these technologies must understand the changing legal and regulatory landscape and develop plans for how they will respond to new regulations. No longer can companies assume that obscurity will protect them.

 

 

About the Author

Arle Lommel

Senior Analyst

Focuses on language technology, artificial intelligence, translation quality, and overall economic factors impacting globalization
