Large language models and language professionals: understanding the promises, risks and impact
Luisa Bentivogli, Fondazione Bruno Kessler, Trento, Italy
ChatGPT is only one of the latest members of the ever-evolving family of so-called large language models (LLMs), i.e. AI systems that use deep neural networks to generate language output based on the patterns they learn from massive amounts of text data. The impressive capabilities exhibited by LLMs across a variety of natural language processing tasks, translation included, are generating justified excitement, but also concern, in the research and industry ecosystems, as well as among users at large.
For language professionals, the rise of AI raises important questions. What are the potential benefits of this technology? What are the possible risks and ethical concerns associated with it? How do the new LLMs compare with current commercial MT systems? Are the ongoing debates and the general sentiment different from the reaction to the advent of neural MT back in 2016? Are LLMs becoming the language services industry's new toolkit? What are the major factors in their practical adoption in production workflows? Will we need to rebrand the translation profession and rethink how translators are trained?
In this talk, I will introduce LLMs and how they work, with a focus on their multilingual and cross-lingual capabilities. I will then discuss the major themes around LLMs outlined above, with the aim of providing a guide for navigating the complex landscape of LLMs and their role in shaping the future of translators' work and of society at large.
Luisa Bentivogli heads the Machine Translation (MT) Unit at Fondazione Bruno Kessler. Her research interests include evaluation of human language technologies, translation technologies for translators, creation and annotation of multilingual corpora, and computational lexicography in multilingual environments. Her current focus is on creating multilingual resources for speech translation (ST) and on assessing and mitigating gender bias in MT and ST, for which she recently won an Amazon Research Award. She has contributed to projects resulting in products such as MateCat, ModernMT and MateSub. She regularly organizes events on MT for translators and the scientific community, such as the School of Advanced Technologies for Translators.