What Do Harry Potter and Generative AI Have in Common?


We’ve all been entertained by spells going wrong in the Harry Potter books and movies. Harry mistakenly inflates his aunt, turning her into a hot air balloon. Ron ends up vomiting slugs when a hex backfires. And even Hermione lands herself in the hospital wing for weeks after a botched transformation.

 

What Does Any of It Have to Do with Generative AI?

Generative AI refers to systems capable of generating content – such as OpenAI’s GPT-4 (the model behind the latest version of ChatGPT), Bing AI Chat, and Google Bard. Some tools generate images – such as DALL-E or Midjourney – or music – such as Jukebox and MuseNet.

If you think of artificial intelligence as the magic, then the prompt is the spell you use to invoke the magic. And not all spells – or prompts – are created equal.

If you use the wrong spell, the results can be catastrophic – and likely to entertain readers and moviegoers. Likewise, if you use a poorly formulated prompt, you are likely to get a good laugh or shake your head wondering whether the result is even worth editing. Human nature dictates that the most egregious or comical mistakes will be posted for the whole world to see – and social media makes sharing them even easier. After all, more people are interested in pointing out the faults of AI than in figuring out how to use the tools to improve their business.

Source: Times of India

But that’s not a new phenomenon. Take machine translation: much ink has been spilled over the last decades pointing out its faults. Yet when we look at the data now, over 70% of LSPs acknowledge using MT in their operations – even if not yet on a grand scale. Naysayers get converted over time.

Generative AI Uses for LSPs

Does that mean we’ll all be using generative AI tools like GPT-4, Bard, or the multitude of other tools to translate content any time soon? Maybe one day. But for now, you are right to have reservations: trained neural MT engines still deliver better translation results, even though generative AI tools have certain advantages of their own.

However, let’s not forget that generative AI can do a lot more than just translate. LSPs should test a variety of scenarios. You can customize a sales email to a client persona. You can generate a first draft of a blog. You can reword a clumsy email answer to a client complaint. And you can do that in multiple languages, although the quality of results will vary wildly between them. And that’s just the beginning. What use cases have you already found? Our research team would love to chat with you about what you tested and the results (email helene@csa-research.com).
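To make one of those scenarios concrete, here is a minimal sketch of how an LSP could script the cold-email use case with the OpenAI Python SDK. The model name, prompt wording, and the draft_cold_email helper are our own illustrative assumptions, not a recommendation of a particular setup.

# Minimal sketch: drafting a persona-tailored cold email with the OpenAI Python SDK.
# Model name, prompt text, and the draft_cold_email helper are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

def draft_cold_email(persona: str, service: str, language: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model; swap in whichever model you are testing
        messages=[
            {"role": "system",
             "content": "You write concise, professional sales emails for a language services provider."},
            {"role": "user",
             "content": f"Write a 120-word cold email in {language} to a {persona}, "
                        f"introducing our {service} services and ending with an invitation to schedule a call."},
        ],
        temperature=0.7,
    )
    return response.choices[0].message.content

print(draft_cold_email("marketing director at a medical device firm", "localization", "German"))

Running the same function with a different persona or language is a quick way to see how wildly the output quality can vary from one language to another.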

At CSA Research, we’re currently testing GPT-4 in a variety of scenarios – comparing, for example, real cold emails from LSP salespeople with generative AI’s output. The latter regularly outperforms what we saw from less experienced LSP salespeople and marketers.

Let the Magic Happen

Remember to run your tests with an open mind. Instead of hunting for faults, identify areas where these new technologies can create efficiencies. Don’t expect perfection. AI will spew out incorrect facts – there’s not much you can do about that until the models get smarter.

However, if the AI produces the right content but doesn’t write it the way you want, that’s something you can fix with better prompts. Now is the time for all LSPs to learn how to write good spells – oh sorry, prompts – to let the magic of AI happen.
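As a small illustration – the wording below is our own assumption, not a tested recipe – compare a vague prompt with a more structured one that spells out the audience, tone, length, and language:

# Illustrative only: the same request phrased as a vague prompt and as a structured prompt.
vague_prompt = "Rewrite this email to the client."

structured_prompt = (
    "Rewrite the email below as a reply to a client complaint about a late delivery. "
    "Tone: apologetic but confident. Length: under 150 words. Language: French. "
    "Keep the agreed corrective actions exactly as stated.\n\n"
    "EMAIL:\n{original_email}"
)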

About the Author

Hélène Pielmeier

Director of LSP Service

Focuses on LSP business management, strategic planning, sales and marketing strategy and execution, project and vendor management, quality process development, and interpreting technologies

