Automation and Artificial Intelligence

A report on the automation track of LocWorldWide42

At LocWorldWide42, there were four sessions in the automation (AU) track: Machine Translation for Games: Shaping the Future of the Industry (AU4), Lessons Learned from Evaluating MT Engines at eBay (AU5), Enterprise Path to NMT: Going From “How?” to “Wow!” (AU6), and Integrating Custom-trained Neural Machine Translation into a Continuous Localization Flow in Enterprises (AU9). In addition, session EG12 (engage global users track) titled Want Higher Quality, Consistency and Delivery Speed? Augment Your Translators with Artificial Intelligence also presented a case study of using machine translation (MT). Together, they represent success stories of using MT, in particular, neural machine translation (NMT) in different approaches (MT post-editing and interactive MT) in various domains, including game localization, ecommerce and marketing materials such as product catalogs.

These presentations and their question-and-answer sessions were informative and insightful. With so much good content to report, we would like to focus on the areas that help us build a forward-thinking outlook, particularly three pairs of dynamic relationships involved in the practice of MT in the localization industry.

Technology and the market

Since the famous Georgetown-IBM experiment successfully demonstrated the feasibility of MT in 1954, MT research and practice have gone through several paradigms, including rule-based MT, statistical MT and neural MT. In the automation track, NMT definitely took center stage. For example, Cristina Anselmi mentioned that EA’s MT implementation is based on Azure Cognitive Service combined with custom models (AU4), Alexander Gigga introduced lengoo’s application of custom-trained NMT (AU9), and the interactive and adaptive approach to assisting linguists presented by John DeNero at Lilt (EG12) is also based on an NMT system.

MT customization is another highlight. By working with the core of MT and developing a series of relevant proprietary technologies (for example, a terminology extraction program, a post-editing interface and a TMS API connector tool), companies can gain maximum flexibility and control over their MT applications. As Alexander Gigga mentioned in the Q&A session (AU9), “We see great benefits of having control of the process from the first iteration of having translation memory, the first training of the model. Once our optimization process kicks off, we see huge improvements in MT scores and post-editing time. The continuous learning is an integrated process of not only our models, but also the entire editing technology that we build ourselves.” Lutz Niederer and Angélique Tesar (AU5) also mentioned that they have their own proprietary MT and that they use a mix of their own technologies and other baseline MT engines for customization. This is also true of the case that Sathish Chander (AU6) reported. He added that by hiring people to the MT team and working collaboratively, they built more career paths for employees.

The focus on NMT and customization is driven by the market. When panelists in the AU6 discussion, moderated by Kirill Soloviev, were asked about the main trigger for deploying MT, they pointed out that they had to meet greatly increased demand for translation, and MT was a viable way to handle such a huge volume. In addition to the speed increase, clients are happy to see that cost has been greatly reduced as well. The figures reported in these success stories are striking: a 60% reduction in turnaround time and a 50-60% reduction in cost. Moreover, presenters reported that translation quality in this approach can basically meet clients’ expectations. There are other benefits that make MT attractive, for example, confidentiality of data (AU4), creating career opportunities (AU6), and enabling and simplifying seamless cross-border trade as well as reducing time to market, not only with MT but also with natural language processing (AU5). In the end, we heard many presenters say that they would like to put MT to more uses, for example, expanding the content types that leverage MT (from user-generated content to customer support content, user interface text and member-to-member communication) or applying MT to more projects.

Clients and MT suppliers

An interesting trend we saw in these success stories of implementing MT is that clients and MT suppliers tend to collaborate more closely. They aim to establish and maintain a strong partnership. Giorgio Mattiuzzo (AU9) mentioned that after working with an external full-service MT partner, NI moved to a lean organization focused on project management that leverages external partners, while the vendor, being more technology-driven, tends to provide the whole package, including both MT technology and linguists, as well as quality management. Alessandra Binazzi (EG12) also emphasized the strong partnership with ASICS Digital’s external vendor, who likewise provided full service, including MT-centered technologies and linguists. She said they treated their vendor as a tech company that could not only provide full language services to them but also help them innovate.

This type of relationship is quite new. According to Giorgio and Alessandra, MT suppliers and clients started to get connected at the end of 2018, launched projects or pilot projects in 2019 and saw satisfying results in 2020. It makes sense not only in terms of cost savings and speed increases, but also from a machine training perspective. It is an effective collaboration model for MT vendors to make their MT more adaptive and interactive, and an effective way to allow MT systems to learn from the translator. A translator creates sentence pairs in their translation, which are very good training materials for a machine since they cover the right content in the correct style. When Alexander Gigga (AU9) answered the question regarding what data are used for custom training apart from translation memory, he included the interaction of the translators with MT, which they monitor and use in a learning cycle where the machine gets better. John DeNero (EG12) also pointed out that in the past two years, the real-time adaptation of Lilt’s MT has improved greatly. With more and more linguists’ input and feedback, machines learn quickly and offer more accurate suggestions for translators.

Human and machine

In theory, FAHQT (fully automatic high-quality translation) is not possible, since the boundaries of words in different languages are so vague and the emotional and international connotations so extensive (see Norbert Wiener’s letter dated April 30, 1947, responding to Warren Weaver’s question about MT). In practice, as long as the real audience for translation is human, we are always the party that best understands and expresses human experiences. Thus, no matter what, humans are an indispensable part of the MT service cycle. However, the role of linguists in MT-driven translation systems is changing quietly, and some of these changes might be difficult to adapt to at first.

Translators’ reaction to MT implementation was a topic that no presenter could avoid. Most presenters admitted that there was resistance at the beginning, and that to address it, linguists needed to understand what MT was and why it was adopted. Being transparent and sharing information is important for achieving a positive mindset. Cristina Anselmi (AU4) explained in detail how EA handles this issue in game localization. To address some team members’ quality concerns and fear of automation, EA provided customized training to internal and external teams and kept them updated on the technology so as to get everyone on the same page. Cristina emphasized the importance of teamwork when deploying MT, including involving external teams and an internal MT task force to mobilize stakeholders and perform continuous quality evaluation.

Linguists need to adjust their roles in the MT-driven translation cycle. For example, Cristina Anselmi (AU4) mentioned they created a new professional role for the deployment of MT — linguistic experts — who are experts in different games and native speakers of specific languages. They help with quality evaluation and data selection for machine training. In the use case that Lutz Niederer and Angélique Tesar (AU5) introduced, MT evaluators are mostly translators, specialized in the content, terminology and style guide. They also emphasized the importance of reviewing skills for a translator.

Indeed, if we look at how MT works with humans, we find that linguists are gradually moving away from their traditional work style. Even in the interactive MT scenario, humans are reviewing machines’ suggestions rather than “translating” the source text themselves. In this sense, linguists are not translating, but reviewing, evaluating and training machines. On the other hand, an MT-centered system might be able to create a translation environment that makes users “feel” like they are translating. For example, as John DeNero mentioned in his presentation, the way in which humans use an NMT system affects everything important: translator resistance, speed and the final quality of a translation. In this regard, an interactive and adaptive NMT system that responds to linguists as they type helps with all three of these dimensions compared with post-editing.

Some tricky issues might arise from this dynamic relationship between human and machine. For example, as MT engines grow more capable, the industry is actually raising the threshold for a qualified linguist rather than lowering it, even though humans now have a powerful engine to support their work. In other words, linguists’ roles will be reevaluated against ever-changing criteria such as skills in MT evaluation, quality control and machine training. Moreover, questions about adjusting linguists’ pay rates in light of MT improvements came up in the Q&A sessions. Obviously, it would be wise for stakeholders to consider these areas and plan ahead.

Concluding remarks: NMT and AI

Human beings are stepping into an age of artificial intelligence (AI). AI is going to play an increasingly large role in the language industry, and NMT is a great example of this trend. We have seen from the automation track that NMT and AI have brought considerable economic benefits and innovative practices. Here we would like to thank all the presenters for their wonderful content contributions to LocWorldWide42 and, more importantly, for their significant pioneering work in AI, which comes from hard work and sincere dedication to the industry.

Stepping back, let’s pore over the concept of AI again. In the 2009 edition of Artificial Intelligence, Elaine Rich defined AI as “the study of how to make computers do things which, at the moment, people do better.” From this perspective, AI is a dynamically evolving concept, and NMT is less a machine with its own cognitive abilities than an artificial agent that learns and grows very fast. This dynamic quality also means that all the other parties involved in the AI-powered translation system, humans included, should adapt to the changes led by the evolution of NMT. A very challenging yet promising future, isn’t it? We look forward to more discussions and interactions on this topic at the next LocWorld event.

About the author

Peng Wang

Peng Wang is a freelance conference interpreter with the Translation Bureau, Public Services and Procurement Canada, and a part-time professor at the School of Translation and Interpretation, University of Ottawa. Her current research interests include the cognitive aspects of interpreting and translation, terminology and documentation, and multilingual data analysis. Peng began conducting corpus-based translation studies at the University of Liverpool and later worked in the corpus linguistics program at Northern Arizona University.