21 June 2022
Sentient AI: Parrot, Parity, or Parody?
In the 2008 thriller Eagle Eye, a rogue “super AI” (called ARIIA) developed by a government agency manipulates humans, the digital environment, and physical objects – often in completely preposterous fashion. Its mission is to maneuver the protagonist into a place where his biometric properties can substitute for those of his deceased twin brother. With the protagonist serving as its agent, ARIIA could be released from the hard-coded constraints that keep it from taking over the government. The film reflects the common misperception that AI – in this case in a pre-neural-network incarnation – is far more capable than it actually is. Some AI experts have long predicted that it is only a matter of time before an artificial intelligence becomes self-aware and exceeds the capabilities of humans. In 1993, sci-fi author Vernor Vinge labeled that eventuality the “singularity,” a term that the futurist Ray Kurzweil subsequently popularized.
Like perfect machine translation, the coming of the singularity is always imminent, yet never achieved. Experts debate whether it will ever arrive, but many of them posit that self-awareness and consciousness would emerge from the artificial superintelligence (ASI) that drives the singularity. Assuming it happens, they also ask whether such an ASI would hold a human-like moral code toward people, be indifferent to them, or, like ARIIA in Eagle Eye, be hostile to them.
Last week, the Washington Post published an article about Blake Lemoine’s claim that LaMDA, the language model/chatbot system built by his employer Google, had achieved sentience and had a “soul.” Lemoine, an engineer in the company’s responsible AI group, based his assertion on a dialogue in which LaMDA expressed human-seeming sentiments and concepts. Google placed Lemoine on leave, thereby sparking renewed discussion about what machine sentience is and what it means. It also stoked debate about whether the dialogue demonstrated true awareness on the part of the chatbot or merely the illusion of it. Most researchers dismissed Lemoine’s claims, which were at least partially informed by his own religious beliefs.
Machine Translation Raises Similar Concerns
These discussions are reminiscent of – and closely related to – debates about whether machine translation has achieved “human parity,” as some researchers have claimed. Others may argue that it is closer to “human parody,” but both LaMDA and the claims about MT raise fundamental questions about what it means to be human, to be intelligent, and to have a “soul” (if people could ever agree on just what that means). On one extreme, scholars such as Stephen Hawking have maintained that these questions are largely irrelevant if an evaluator cannot tell the difference between a human and a computer in conversation – the assertion behind the famous Turing Test. On the other hand, philosopher John Searle’s famous “Chinese room” thought experiment contended that approaches such as the Turing Test cannot demonstrate actual intelligence or sentience, no matter how convincing a system’s output may be.
The German natural language processing (NLP) researcher Aljoscha Burchardt falls into the latter camp, arguing that current-generation AI is fundamentally a “parrot” – capable of repeating things it has seen in some fashion in its training data, but without understanding them. As he put it, “[AI] is a parrot. A sophisticated parrot, but still a parrot.” Both MT and Lemoine’s transcript illustrate the difficulty of distinguishing between a parrot, something at human parity, and human parody. Does LaMDA actually understand the questions and respond to them from a position of genuine comprehension and introspection? Or does it generate convincing-sounding output because the statistical patterns in its massive training data allow it to emulate intelligence? In the latter case, we should note that even if it is a parrot, it is certainly an impressive one.
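To make the parrot metaphor concrete, here is a minimal sketch in Python of a bigram “parrot”: it records which word follows which in its training text and then emits statistically plausible continuations without any representation of meaning. The function names and tiny corpus are our own illustration – real systems such as LaMDA use neural networks trained on vastly more data – but the underlying principle of pattern continuation rather than comprehension is the same.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which word follows which in the training text."""
    model = defaultdict(list)
    words = text.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def parrot(model, seed, length=10):
    """Emit a plausible continuation by sampling observed successors."""
    output = [seed]
    for _ in range(length):
        successors = model.get(output[-1])
        if not successors:
            break  # never saw this word followed by anything; nothing to imitate
        output.append(random.choice(successors))
    return " ".join(output)

corpus = "the parrot repeats what the parrot has heard and the parrot sounds convincing"
model = train_bigrams(corpus)
print(parrot(model, "the"))  # e.g. "the parrot sounds convincing"
```

Scaled up by many orders of magnitude, and with far more sophisticated statistics, this is the essence of the “sophisticated parrot” position: fluent output need not imply understanding.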
Why These Questions Matter for the Language Industry
What are the implications (if any) for the language industry, and what light can the industry shed on these questions? We see the following:
- An ASI with human-level intelligence could replace human translators. If Lemoine is right that LaMDA has become self-aware and intelligent, it would be only a matter of time before machines could do the work of human translators – as well as a lot of jobs in many other fields currently performed by “meatbags” (what some sci-fi androids call people). Given the massive amounts of content that would benefit from MT delivering human levels of quality at a fraction of the cost, there’s no doubt that it would displace human workers.
- Instead, so far at least, humans are becoming more important. Fortunately for translators, the evidence to date suggests that AI is serving as an enabler for linguists rather than as a replacement. If anything, we see humans moving deeper into the process in a “human at the core” rather than a “human in the loop” model. Augmented translation approaches increase the value of the contribution that language experts bring by focusing their attention on the things that cannot be automated, at least for now. NLP helps translators but does not displace them, and the fact that human contributions are increasing in value argues against a simple replacement theory.
- Perfect MT would argue in favor of AI self-awareness. As a corollary to the previous points, if an AI could master translation, it would have to negotiate a variety of speech registers and modalities – and be aware of and responsive to the complex connotations of language. If it could convincingly do this across content types and situations it had not been specifically trained to handle, it would be a strong argument in favor of true intelligence. At the very least it would muddy the notion that MT is nothing more than a sophisticated parrot.
- True intelligence would require the recursion, introspection, and interiority of responsive MT. We have argued that the next steps forward for machine translation will come when systems can inspect their own output, adapt it recursively in response to new learning, and reflect on it (see the sketch after this list). Without these capabilities, MT will not achieve true human parity, and because today’s systems lack them – or at best approximate them indirectly – it is difficult to argue that they are truly intelligent. Large language models such as LaMDA do not have these capacities either, and the lessons from the language industry show that they are vital.
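As an illustration of what such introspection might look like in practice, the following is a minimal sketch of the inspect-adapt-reflect loop. The translate, estimate_quality, and revise components are hypothetical stand-ins for an MT engine, a quality-estimation model, and a repair step – no production system exposes this loop end to end today.

```python
def responsive_translate(source, translate, estimate_quality, revise,
                         threshold=0.9, max_rounds=3):
    """Translate, then recursively inspect and repair the output."""
    target = translate(source)
    for _ in range(max_rounds):
        score, issues = estimate_quality(source, target)  # inspect own output
        if score >= threshold:
            break  # output judged good enough; stop revising
        target = revise(source, target, issues)  # adapt in response to findings
    return target

# Toy stand-ins so the sketch runs; a real system would plug in an MT
# engine, a quality-estimation model, and a constrained repair step.
if __name__ == "__main__":
    translate = lambda s: s.upper()            # placeholder "translation"
    estimate_quality = lambda s, t: (1.0, [])  # always satisfied
    revise = lambda s, t, issues: t
    print(responsive_translate("hello world", translate, estimate_quality, revise))
```

The point of the sketch is the control flow: the system evaluates its own output and recursively repairs it, which is precisely the capability that current MT engines and large language models lack.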
In summary, AI and MT are not as advanced as the popular press and some researchers would suggest. They may deliver the illusion of awareness because they are convincing parrots, but they have yet to move from parrot to parity. Machines may someday reach sentience, but today’s chatbots are more of a “parler” trick than true intelligence. As a result, human translators do not (yet?) have to fear AI any more than we have to fear the evil AI of Eagle Eye or the Terminator film franchise.