AI-Powered Language Tools for Translators and Interpreters in 2025 (Keynote)

Episode 3 December 02, 2024 00:58:37
LangTalent Podcast


Hosted By

Eddie Arrieta

Show Notes

AI technology is set to transform the landscape of translation and interpretation. Discover essential AI tools that will reshape how language professionals work in 2025. Laszlo Varga will showcase innovative solutions designed to enhance productivity, accuracy, and collaboration, helping you stay ahead.

Laszlo Varga | Nimdzi Insights


Episode Transcript

[00:00:03] Speaker A: AI-powered language tools. I didn't want to say 10 because, you know, there are so many. I don't know if we'll do 10, Laszlo will tell us, but we have said 10. Let's say AI-powered language tools for translators and interpreters for 2025. AI technology is set to transform the landscape of translation and interpretation. Discover 10 essential AI tools that will reshape how language professionals work in 2025. We'll showcase innovative solutions designed to enhance productivity, accuracy and collaboration, helping you stay ahead. This is with Mr. Laszlo Varga. So please help me welcome Laszlo Varga, who is a senior consultant at Nimdzi Insights with over a decade of experience in the localization industry, specializing in supply chain process optimization and strategic innovation. Known for his passion for understanding what drives success in individuals, teams and businesses, he combines expertise in service delivery, solution development and organizational change to help clients make impactful decisions. A keen advocate of continuous improvement and agile methodologies, Laszlo brings a systemic end-to-end approach to driving value and fostering growth. So please help me welcome Mr. Laszlo Varga. Laszlo, welcome.

[00:01:32] Speaker B: Eddie, thank you so much, and hello to everybody around the world right now.

[00:01:37] Speaker A: You just can't see it, but they are all clapping right now. Those of you who might be able to see it in the chat, please clap, put emojis of claps, and then we'll all imagine the claps. Laszlo, thank you so much for doing this.

[00:01:52] Speaker B: Thanks Eddie for having me, and good morning, good afternoon, good evening to some of you online, and hello to future me and future everybody who will be rewatching this. And please don't laugh too hard when, in two, three years' time, this presentation looks a little outdated. Things are happening fast.
[00:02:11] Speaker A: We will do it again then, and then we'll say how amazing it is. I think it's one of the most exciting presentations for me because it's just making it more practical. We talk a lot about how, you know, we should embrace. Oh, let's embrace, let's embrace. It's very conceptual. It's: okay, I'm ready to embrace. Then what are some of the things?

[00:02:32] Speaker B: Embrace what?

[00:02:33] Speaker A: Yeah, embrace what? So, Laszlo, thank you so much for doing this. And Mila, I will let Laszlo do it. We will be around, Laszlo, looking at the questions that people have. I encourage everyone to write down their questions in the comments. We have right now 75 people watching live, so we hope to make the most out of this session. Thank you for doing it.

[00:02:56] Speaker B: All right, I'm sharing a screen somewhere, so I hope it makes it onto the stream. There we go. That's fantastic. The characters will be a little bit small here and there for reading, but I want to say hi to everybody for this presentation about 10 essential tools. And Eddie, you're absolutely right, there are a few more than that, as the original description was what will be reshaping the industry in 2025. Well, many of these already are reshaping it, and we can only guess what the tools will be next year that will be doing something similar. Just a quick introduction, although Eddie, you already did this. I'm lead researcher and consultant at Nimdzi Insights, and I'm very fascinated by technology. I do a lot of market research with my team, and we also do technology and process consultancy custom projects for our clients. But I have my fingers in almost everything, from user experience and customer engagement to, again, technology. Quick agenda: we're going to do a quick warm-up and a backdrop to why this is so important. We're going to look at the human perspective. Of course we're going to talk.
I'm going to talk a lot about LLMs, and maybe you will have a few questions as well, and further use cases and tools too. So let's start with 10 AI tools. This is a joke, of course, because there are only nine of them, but they cover everything from translation to machine translation post-editing, automated LQA, transcribing, even speech translation if necessary. OpenAI can do it all. Let's quickly add a tenth one: whatever that use case is, it's probably, again, OpenAI. It looks like we're done. Eddie, thank you so much for the attention, 10 AI tools named. Well, there's a little bit more to it than that. We at Nimdzi list more than 1,000 tools in our Language Technology Radar, which is an interactive online catalog of language solutions, and you can see the various categories that we list in here. We also have a report attached to it which details a lot of things about the different technology categories, players and trends. I very much recommend reading it for a broader view than what is going to get presented right now. Those thousand tools, of course, come in various shapes and forms. Just in machine translation, I haven't counted them, to be honest with you, but it looks like a very extensive list. Of course it isn't complete. There are always new solutions coming out onto the market, new brands being deployed, and some of them may actually be disappearing, so we keep updating our list. Again, please feel free to visit and find out more about these tools on our website. Something that we also published this year as a research company is what we call the language services development curve. The reason why this is incredibly important is that it depicts how the industry has been developing, but also how it will be developing, in our estimation, in the future. Which is to say, under that curve you will find a whole bunch of what look like S-curves, yellow S-curves.
Those are all fundamental technologies that helped the industry grow, even though with every single one of those, when they appeared, there came the fear of "oh, it's going to make the language industry go away", from translation memories to machine translation. Right now it's large language models and generative AI. We're pretty sure that in the next couple of years or so there will be another big wave, very likely again some machine learning algorithm, not large language models but something beyond them, that will give a further boost to the growth of the industry. That said, most people on their own probably feel that this is what's happening over time, which is going down in this case. Originally, all the language work was done by humans, and then suddenly machine translation appeared and gradually took over some of the human work. Right now we're in the age where machine translation is being augmented with large language models, or large language models may even be doing the machine translation themselves, to an even larger extent than the neural machine translation engines did previously. So, even less work for humans. And you can imagine that whatever the next big language AI will be, would it be a platform, would it be a tool, would it be an algorithm, it will further reduce human involvement. And because, to my understanding, most of you out there, from Taiwan to Colombia, are translators or interpreters, or at least working in the industry, you probably think and feel this very same thing on your skin. However, for the industry overall, this is more the picture, which is to say: when humans alone did the work, there were only that many translators and interpreters who could do it, but there was a lot more demand. Machine translation kind of helped fill those shoes. And even though it took over some of the human tasks related to translating and interpreting, there's actually just more work for humans as well.
And currently, with large language models, we're probably facing the same. There is a step back in demand, to ask: can this new tech do more than the previous one? And so many of the buying decisions are being delayed. But actually we expect that human work will increase, and machine translation, large language models, the AI part, will keep increasing as well, though at a faster pace than the human work. And the same will probably be true for the next S-curve, the next technology S-curve that will be boosting the language industry. Of course, at some point in time, according to some, we will be hitting the wall which is called Skynet, and AI will do everything, not just language work but all sorts of other work as well, and we'll be enjoying our lifelong pensions, every single one of us. We'll see if that ever happens. I'm not going to say whether I'll be happy or unhappy about it; I just want to say I will be friendly with our AI overlords. But then it comes to the human perspective, which is really important in the industry. Why is it that language technology helps productivity? Well, the Boston Consulting Group created a survey related to large language models, and they asked the question: how does it help with the performance of individuals? Their finding was, and you may have seen this slide before in a similar shape, that a 40% productivity increase, or performance increase, happened when certain individuals, for given tasks which are actually very similar to translating, experienced a 40% increase in performance compared to people who did not use GPT-4. What usually gets omitted is the other half of the chart, which says that for some tasks it can actually seriously hurt performance as well, by even more than 20% on average. And this is pretty much how many of the big AI labs are thinking: they never really targeted the language problem in the first place. They were tackling productivity.
And the majority of the development of the big AI tools nowadays is especially focused on that: business problem solving, kind of imitating thinking, imitating logic, providing solutions to problems that we ourselves wouldn't be investing time and effort into solving otherwise. Not specifically for the language world. But let's talk about this bit, the 40%. What does that really mean? Well, what does performance mean in this case? It's saved time. Time is the most valuable resource that every one of us has, and the most valuable thing that anybody, including everybody at this symposium, offers is saving time. Essentially, when you translate or you interpret, you save time for somebody else. You save them the time of learning the language. Machine translation saves time by generating an output that can be later edited, hopefully faster than creating the translation from scratch. Or in some cases, speech translation will create opportunities for companies to use the tool instead of having to hire an interpreter and go through weeks and weeks of planning and preparation: just plug in a tool. Where perfect quality is not required, it will do the job. Saving time is the key. And there are two ways of saving time. A technology can either save my time or my customer's time, and save some of my time or all of my time. If it saves some of the time, it means it increased productivity, increased performance; if it saves all the time, then it basically means, yep, it replaced the human. We are at the stage with AI tools where they save some of the time. There are plenty of instances where individuals in the industry say a machine replaced my job. Well, actually what the machine did was replace your task, for example the task of simple translation, which got replaced by quality efforts at the end of the pipeline, would that be revision, would that be LQA, would that be post-editing or similar. Moving on.
These are the roles that we typically define in the industry. Whether you're a translator or an interpreter, these are more or less the things that we do. A translator translates or edits or post-edits or creates text, maybe, if you're a writer. An interpreter would be translating speech, summarizing while translating, and transmitting words in a different language. A vendor manager would be finding translators, maintaining a database, sending feedback. A project manager would be creating projects, allocating work, coordinating and messaging, and of course delivering. These are actually not roles, these are tasks. And let's face it, AI, or automations, especially with AI, can do some, if not the majority, of these to a certain level of confidence. Let's rephrase these roles a little bit in a way that is more meaningful to the customer, which is the person who enjoys the stuff that gets created. So a translator, instead of somebody who translates or creates or edits, would rather be somebody who creates a seamless language experience that enables the user to connect and engage with the content, product and brand. Or the interpreter, instead of saying "oh, I'm just translating the words that are being said": I capture and transmit meaning so that the listeners trust and understand what they hear. And these are not tasks, these are values that are being created. And this is the place where AI can help you do these. There is always a lot of conversation in the industry about where the value is being created at the operational level. This is it. Even though typically the C-level up top in big enterprises, the executives, they don't really want to hear this and understand this, because they don't speak the language that we speak, that linguists speak.
But as soon as you start using these expressions and phrases for the value that translators, interpreters and anybody working in the industry create, they will prick their ears up and say: aha, now I understand why you're working for me. But it's almost all about GenAI when it comes to technology. Let's give a few examples, or rather, let's look at what large language models are. They are basically, let's just call them, general-purpose machines: a certain kind of AI, typically created by big AI labs. And they have abilities such as: they learn from context; you can prompt them, you can inject instructions into the prompt, and more often than not the models will actually respect those instructions. By now they can also fetch and read external data, or even use external tools as agents. And they can also be retrained to become specialists instead of general-purpose machines. That is fantastic, because we humans are special machines, biological machines, and we have the ability to respond to each of these. If LLMs can learn from context, we can provide that context. If they respect instructions, that's great, because we can create those. If they can fetch data, we can specify which data that is. And if they can be retrained, well, we can do that. One thing that machines cannot do, and we can, is verify their output. But to be able to do that, we actually need to know what good looks like, or rather what is fit for purpose, whichever phrase you use and whichever paradigm you're following. The key here is that there is a necessity for humans to verify the output of large language models, and only we can know what good looks like. Otherwise, this is what we end up doing. People here online who are old enough, like me, to have seen the movie Airplane! from 1980 would know that that is a copilot. And many of the AI tools currently on the market are often called copilots.
But copilots are only as good as the pilots who can control, manage and override the copilot's work. And in this scene, the stewardess and the doctor are next to the inflatable copilot in the cockpit, and of course they have no idea how to fly a plane. You can imagine what can go wrong in this scenario. So unless you know what good is, you're just using a copilot, which is what we often describe as an eager but clueless assistant that still needs guidance and micromanagement. But large language models can do a lot of things. Oh, the pilot. I'm reading the questions as well. The pilot. There were two pilots, and both of them are sick because they ate something bad and so on. They need to find a third pilot to actually land the plane, because that inflatable copilot can't land it. There are a lot of use cases which can be tackled with large language models. In the translation space, it can be pre-translation, which is, for example, source content optimization, or even predicting whether using machine translation is meaningful or not; predicting the quality; doing the machine translation itself; or doing post-machine-translation tasks such as automated post-editing or quality assessment and evaluation. Of course, there are all sorts of other content operations that can be done. You can create content with it, rewrite it, summarize it, remove gender bias or flag harmful content, paraphrase, make it more customized. And you can also use large language models to do QA: automated LQAs or, again, quality estimations. You can inject terminology, but you can also extract terminology. You can use it for engineering, creating scripts, or use it for technical support as a chatbot. And there's a lot of linguistic context that can also be used with large language models too. So it looks like large language models can do a lot of things. And then comes the question: if a tool can do things, do we really want them to be done by that tool?
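To make one of the use cases above, automated post-editing, a little more concrete, here is a minimal sketch of what a prompt for it might look like. The function name, the instruction wording and the fields are illustrative assumptions for this talk, not any vendor's actual implementation.

```python
# A minimal, illustrative prompt builder for automated post-editing (APE).
# Everything here (wording, field layout) is a hypothetical sketch.

def build_ape_prompt(source: str, mt_output: str, target_lang: str) -> str:
    """Assemble a single prompt asking an LLM to post-edit raw MT output."""
    return (
        f"You are a professional {target_lang} post-editor.\n"
        "Correct mistranslations, grammar and terminology in the draft below.\n"
        "Change only what is wrong; keep the register of the source.\n\n"
        f"Source (English): {source}\n"
        f"Draft ({target_lang}): {mt_output}\n"
        f"Post-edited {target_lang} translation:"
    )

prompt = build_ape_prompt(
    source="The update improves battery life.",
    mt_output="La actualización mejora la vida de batería.",
    target_lang="Spanish",
)
print(prompt)
```

The same skeleton, with a different instruction block, would serve the other post-MT tasks mentioned here, such as quality assessment.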
And the metaphor that I love using is that of "give a man a hammer and the man will find a nail everywhere". That's basically what the generic, out-of-the-box large language models are. They're hammers. And it's easy to find many nails, in many shapes and forms, longer nails, shorter nails, this head, that head and all that, but they're still just hammers. Whereas for many tasks we actually need very specialized tools. In that toolbox there is other AI: specialized large language models, or maybe even small language models, and of course all the traditional tools that we have in our toolboxes, would that be simple algorithms or automations. And the key question is: where is this heading? Is it going to be the largest large language models doing every single piece of work, or will the smaller ones be specialized for every given task? We don't know; the jury is out. That said, large language models, as I said, can be customized, they can be trained and retrained. On the left of the screen you see that's basically what big tech and the AI labs do. They find training data and they pre-train models on vast amounts of data on very large computers, and they create foundation models that are made available so that they can be adapted. So you can create adapted models for translation, for question answering, for text classification, and all sorts of other use cases. And actually this can be done by everyone. Essentially, language technologies have to a certain degree become democratized, and everybody from a translator to a CEO can use these tools to create value within the company, or at least to try, test, experiment and find out what's possible currently and what isn't. Let's move on to use cases, which is probably the biggest part that we need to talk about today. 10 tools were mentioned; I already mentioned more than 1,000 in our Radar. Let's look at some specific ones.
The key message here will be that every idea is a potential use case, and probably there is an AI tool that will help with that use case. But first of all, here's what the enterprise viewpoint of language technology is. Most of the very large enterprises say: hey, I have a contract with Google or Microsoft or Amazon, I can get all these technologies from there. I can get the large language model, I can get a speech recognition engine, I can get a speech synthesizer, and maybe even some NLP, natural language processing, tools as well. There's a lot more to it than that, but each of these platforms offers its own machine translation technology too. What we also see is, of course, that the machine translation space is getting heavily disrupted by large language models, both in terms of depth and width, which, you may or may not realize, is a pun. If it didn't really land very well, then I apologize. For those who know the tools, they probably recognize that depth refers to DeepL and width refers to one of the new tools released by Unbabel, called Widn. And so some of the machine translation providers, just like DeepL and Unbabel, are integrating large language models into their technologies. And I recommend that you go ahead and try as many of them as you can. Translated's tools, especially Lara, which was also announced in the last few weeks; DeepL, which is hopefully known to most of you; Globalese, which was acquired by memoQ recently; or Unbabel's Widn tool, Widn.AI. Go ahead, try them, test them. They're very interesting tools on their own, especially Lara and Widn, because of the way they created the public interface: just as when you go to ChatGPT, you can ask a question, you can prompt it, you can guide the model. This is exactly the interface that is enabled by large language models. In your natural language, you can tell the model what you would like it to do.
With the translation, instead of the traditional, say, Google Translate, where you input a sequence of words and you get back a sequence of words, now you can give a direction to the models, the translation models, and it's really amazing to see how the response changes. For example, Lara even gives, at the end of the translation, a little feedback in natural language, in human language, to say how likely it is that the translation is good, or how good it is, how it reflects the original source and all of that. Very interesting progress in language technologies and machine translation, that you can actually use your own language to get the output that you need. Similarly, if you're familiar with them, some of the machine translation aggregators, those companies that pull together different engines and make them available for individuals, other technology platforms, as well as, of course, service providers, such as Intento, Custom.MT, Blackbird or even Phrase, have inserted large language models into their machine translation offerings, or even offer adjacent technology services and use cases such as automated post-editing or quality estimation and similar next to it. Of course, the translation management systems: if you're a translator, you're probably working in not just one but several translation management systems, and specifically in their CAT tools. And they're all being injected with new AI, which is great, because at least you don't have to go out to a third-party tool, say ChatGPT; you can do it directly in the tool, if it's provided by the specific client and account that you work for. So on the automated translation side, sure, you can use large language models, ChatGPT or Anthropic's Claude or even Gemini, for machine translation. You can also inject glossaries. You can potentially use them to repair fuzzy matches, or adapt to style or formality, remove bias and similar.
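To make the glossary-injection idea concrete, here is a minimal sketch in Python. The prompt wording and the idea of listing terms inline are illustrative assumptions; real CAT tools and aggregators each implement this their own way.

```python
# Illustrative sketch: injecting a glossary into an LLM translation prompt.
# The prompt wording is a hypothetical example, not any specific tool's template.

def build_translation_prompt(source: str, target_lang: str, glossary: dict) -> str:
    """Build a prompt that asks for a translation while enforcing glossary terms."""
    term_lines = "\n".join(f'- "{src}" must be translated as "{tgt}"'
                           for src, tgt in glossary.items())
    return (
        f"Translate the following text into {target_lang}.\n"
        "Respect this glossary exactly:\n"
        f"{term_lines}\n\n"
        f"Text: {source}\n"
        f"{target_lang} translation:"
    )

prompt = build_translation_prompt(
    source="Open the dashboard to review your account.",
    target_lang="German",
    glossary={"dashboard": "Übersichtsseite", "account": "Konto"},
)
print(prompt)
```

The same pattern extends naturally to the other instructions mentioned here, such as "use formal register" or "keep this fuzzy-match text unchanged where correct".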
On the creative side of things, you can use it for rewriting, for shortening, paraphrasing or providing alternative translations. You can also prompt it, in some of the translation management systems, to optimize for search engines. On the quality side, at the end of the pipeline, you can do automated LQAs. There can be some content filtering for biased or harmful language. Again, quality estimation is really important when it comes to reducing costs, and we'll get there two slides from here. And other use cases are also being integrated into CAT tools and translation management systems. Next to it, of course, there is a different view that we created. I think I'm going to move on from this slide, just to say there are different approaches. So depending on which TMS you work in, you will find different use cases and a different experience with these large language models. And on the machine translation front, separately, there's something called machine translation quality estimation, which some of you will not really be working with or facing. But let's recognize this is one of those use cases, and one of those tools and features, that was developed specifically to reduce effort. Which is to say: when a machine translation engine is used, previously there was no way of guessing, estimating or understanding how good that translation is. Does it really need to be post-edited? Does it need to be looked at by a quality person? Does it need to be revised? Now MTQE can help with that and create a risk assessment, separating out those segments created by MT that are very, very likely good enough, and I know that what counts as good enough is a relative term, which do not need to be touched by a human, from the parts of the content that actually need to be thoroughly post-edited and revised. And there are tools from Unbabel, XTM, TAUS, Transifex, ModelFront and a lot more.
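The routing logic that MTQE enables can be sketched in a few lines. The 0-to-1 score and the threshold value below are illustrative assumptions; each vendor uses its own scale and calibration.

```python
# Illustrative sketch of MTQE-based routing: segments whose quality-estimation
# score clears a threshold skip human post-editing; the rest go to a linguist.
# The score scale (0.0-1.0) and the 0.9 threshold are made-up examples.

def route_segments(segments, threshold=0.9):
    """Split MT segments into auto-approved and needs-post-editing buckets."""
    auto_approved, needs_pe = [], []
    for seg in segments:
        (auto_approved if seg["qe_score"] >= threshold else needs_pe).append(seg)
    return auto_approved, needs_pe

segments = [
    {"id": 1, "mt": "Guten Morgen.", "qe_score": 0.97},
    {"id": 2, "mt": "Der Akku ist voll geladen.", "qe_score": 0.93},
    {"id": 3, "mt": "Bitte das Gerät kalt stellen.", "qe_score": 0.61},
]
approved, to_edit = route_segments(segments)
print(len(approved), len(to_edit))  # → 2 1
```

This is the "risk assessment" described above reduced to its essence: everything hangs on how trustworthy the score is, which is why comparing QE systems against each other matters.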
And actually they're probably available by now for almost all, if not all, translation management systems as well. I need to give a shout-out to Konstantin Dranch, who with Custom.MT created a comparison tool for machine translation quality estimation, which is a very interesting use case: now you can ask a machine to evaluate how good the machines are that evaluate other machines. It gets compounded. It's a very interesting use case, and we're looking forward to how that evolves as well. But of course it's not just technology providers that offer similar services. If you use any of these, then you probably know the names; if you don't use them through your language service provider, then you may not be familiar with them. I'm going to ask you: if you use any of these, Evolve, Lea, Aurora, Opal or Stream AI, if you work in any of these, please drop a comment. I would be very happy to hear your experience with these tools. Language service providers are just as eager to be part of the technology change and to integrate large language models and AI further into their workflows. That is typically built for their client deliveries, but of course they provide an interface for the translators, and language talent in general, to be able to do their work better, hopefully, but certainly faster, which will contribute to their clients' ability to go to market faster and to engage prospects faster as well. Next to translation, of course, it's not just translation that we talk about. There's something looming in the air which is called multilingual content creation, or multilingual content generation. And there have already been plenty of tools. I'm sure you know GPT-4; well, GPT-2 was already released about four years ago, and ever since then there have been a host of different companies creating content creation tools.
You may have heard of companies like Jasper and similar, and by now language technology companies too, DeepL, Lilt, Writer or Smartcat, have integrated large language models into their tools to create content. These features are also typically available in the language service providers' AI platforms. But the truth is, this use case is also accessible and available to you. You can create the first draft of your email, you can create a first draft of your blog post, even multilingually. Essentially this circumnavigates, or circumvents, the entire localization workflow, the create, translate and then publish workflow, and says: oh, let's just create and publish. There's no need for translation, because there's a multilingual element in these tools. Bear in mind, of course, that we know of stories where a fairly large language service provider lost a multimillion-dollar business to, well, one of these content creation tools, and the client came back a few months later and said: sorry, our pilots were running really well, but in a real production environment it is not that good; we still need you to check and verify that all this is correct. Essentially, in some sense, creative content creation is also impacted by large language models, similarly to translation: just as we not only translate but post-edit machine translation output, very similarly the content creation space is impacted, to say: yep, it's the AI content that needs to be amended, adjusted and validated. And of course these are mostly powered by large language models. And if you want to use a large language model, go right ahead, just go to OpenAI or to Anthropic, the makers of Claude. But you can also go to Google and use Gemini, and these AI labs and big tech companies will give you access to their tools. My favorite way of using large language models personally is a platform called Poe.
Poe is not a large language model; it's rather a collection of large language models. Essentially, with a single subscription you can gain access to these LLMs, of course only through their chat interface, and we'll get there in a moment. Why is that important? Why did I just start talking about the chat interface? Is there another interface? Of course, there is the programmatic one, through which companies, enterprises, LSPs would typically be using large language models. But if you want to try large language models on your own, you may think of OpenAI's GPT, but there is a great host of other models out there, including those that may already be targeting a certain use case: they have been fine-tuned for some special task, or they may actually be fine-tuned or retrained for a specific set of languages. And Poe grants you access to a whole lot of these. And Poe is not all-inclusive. Hugging Face is another alternative, which has a lot more models than Poe does, and many of those models are open source; many of the models on Hugging Face are even free of charge or licensable in certain ways. If you have some Python experience and you love playing around with technologies, I strongly recommend you look at Hugging Face and find out what you can do, even with a Google Colab notebook or Jupyter notebook, and go wild with it. It's incredible the amount of variety that is available there. That said, I said we need to make a distinction between chat models and programmatic models when it comes to large language models. And as individuals, we typically interact with the chat models. So chat is chat. The models that we use are ChatGPT, or Claude directly through Anthropic. They are trained for conversation; that's what they do. And I recommend that you are nice to your AI tool, because then it's simply more likely to provide valuable and correct output to you. But also, unfortunately, the chat interface doesn't allow for a large level of control.
There are settings, for example top-k or temperature, which you cannot access in a chat-based model like ChatGPT directly. But the good news is that the chat history becomes part of the conversation, part of the prompt. It's called incremental prompting, which means that whenever you continue the conversation, whatever has been said before is part of the context of the conversation with the chat model. You can of course always reset your chat, or you can jump back and forth between chats, if you want, with different models, and see how the responses differ depending on how you converse with them, how you use your prompts, how you phrase your request, or how you provide feedback to the model. API prompting, which is the programmatic way of doing it, is a way of having one machine communicate with another machine. This is how a translation management system talks to a large language model: it sends an input, asks for an output, gets the output and displays it on its interface. Those are more instruction-following models. They can be heavily automated. There's a very high level of control, or rather a high level of control, but there's no chat history. You need to put everything that you need into your prompt upfront, because you can't really follow up on it. In other words, if your prompt failed and you didn't get back the result that you were looking for, you have to redo the entire prompt. There is no way to say: oh, could you please correct this piece? That's the main difference between chat LLMs and programmatically used LLMs. With one of them you can always go back and correct and challenge the LLM, through the prompt as context; with the programmatic way of using them, you have one shot, one attempt to use the model well, and you need to include everything in your prompt: would that be examples, would that be instructions, what to do, what not to do, the format, everything else with it.
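The chat-versus-API difference described here can be sketched without any real API calls. The role/content message structure below mirrors the format widely used by chat model APIs; the example wording is an illustrative assumption.

```python
# Illustrative sketch of the two prompting styles described above.
# No network calls: we only build the payloads that would be sent to a model.

# 1) Chat / incremental prompting: history accumulates, so a follow-up
#    like "make it more formal" implicitly carries all the earlier context.
chat_history = [
    {"role": "user", "content": "Translate to French: 'The meeting is at noon.'"},
    {"role": "assistant", "content": "La réunion est à midi."},
    {"role": "user", "content": "Make it more formal, please."},  # follow-up works
]

# 2) API / one-shot prompting: everything must be in the single prompt upfront,
#    because there is no history to lean on and no chance to follow up.
one_shot_prompt = (
    "Translate to French. Use a formal register. "
    "Return only the translation, no explanation.\n"
    "Text: 'The meeting is at noon.'"
)

print(len(chat_history), "chat turns vs one self-contained prompt")
```

Note how the formality instruction appears as a follow-up turn in the chat version but has to be baked into the one-shot prompt from the start; that is exactly the trade-off between conversational control and automation.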
Hopefully that's useful information, because there are plenty of use cases for which you can use AI, not just in translation work. Of course, language work is where we mostly use AI. Let's just face it, machine translation is AI and we have been using it for a long time. But there are plenty of auxiliary AI use cases, such as AI used in communications. Ultimately, most of the communication that we do with our clients, because we're a remote industry, is, well, it's what it is: text based. And guess what? You can definitely use large language models to help you a lot. If you talk to a client or you talk to a vendor, depending on which side you're sitting on, you can use the LLM to rephrase your message to speak their language. For example, if you're speaking to a client-side vendor manager or a client-side project manager, or maybe a program manager or a director on the client side, you probably want to talk to them differently. There are different purposes, there are different lingos that need to be used. This is especially helpful if you're perhaps not a native speaker of the language of the client that you're working for. At the same time, these tools are often built into your productivity tools already. So on the Microsoft platform you can use Copilot to do some of this. In Google you can use some features that are powered by Gemini, or you can download a tool like Grammarly to help you with this. And there are plenty of other AI tools, often based again on OpenAI's or Google's AI platforms, that will provide similar services. And you may have even seen some commercials from Apple where they advertise their new Apple Intelligence on brand-new mobile phones: there's an employee sitting in the office who wants to say something rude to the boss, types the rude thing into the mobile phone and clicks rephrase, so that the boss understands the displeasure, but it's said in a polite way. 
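That rephrase-for-your-audience idea can be sketched as a simple prompt template that could be sent to any LLM. The role descriptions and wording below are illustrative assumptions for the example, not taken from Copilot, Gemini, Grammarly or any specific product:

```python
# Illustrative sketch: building an audience-aware "rephrase" prompt.
# The audience styles are assumptions made up for this example.

AUDIENCE_STYLE = {
    "vendor manager": "concise, operational, focused on capacity and rates",
    "project manager": "deadline- and deliverable-oriented, status-first",
    "director": "high-level, business-impact-oriented, minimal detail",
}

def rephrase_prompt(message, audience):
    """Wrap a blunt message in instructions tailored to the reader's role."""
    style = AUDIENCE_STYLE.get(audience, "clear and professional")
    return (
        f"Rephrase the message below for a client-side {audience}. "
        f"Keep the meaning, but make the tone {style}, and polite even if "
        f"the original is blunt.\n\nMessage:\n{message}"
    )

prompt = rephrase_prompt(
    "This deadline is impossible and the source files are a mess.",
    "project manager",
)
```

The point of the template is that the blunt original never reaches the client directly: it travels inside the instructions, and only the model's polite rewrite would be sent on.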
Yes, it can even be used in those cases, even for escalations. If you really want to just blow off some steam, type it into your AI and ask it to rephrase. You will feel good, and the person that you're sending the message to will not be offended. Meeting tools: there are plenty of them that can be used for transcription or AI summarization. We at Nimdzi of course use these regularly, basically all the time, unless there is a lack of consent from the other side, in which case we are not recording the entire session, the video, or sometimes not even the audio. But we do get an AI transcription tool running next to our meetings, especially the internal ones. And each of us can go into those tools and ask for different summaries or different excerpts, ask for a list of action items or similar. And there are a myriad of options out there that you can integrate with your favorite email or collaboration platform. That could be Tactiq, Rev, AssemblyAI, or ChatGPT for transcription. There are a lot of them, and I suggest that you try a few of them before you commit to a subscription. There could be different things that you're personally looking for where you would find these tools useful in your day-to-day productivity. You can also use many AI tools for documentation or data analysis too, whether it's sheets or charts, even including information coming from there, so you don't have to read through an entire 20-page document but get a quick notion of: do I want to read through the whole thing? We use this in research, but I believe that for you as translators or interpreters or other language professionals, it's equally useful when you go and look for, say, a research paper, or a course that is provided, or an assessment or a report on something of interest in the language world. And you can use any of the AI tools for this, again ChatGPT or Claude from Anthropic. I personally typically use poe.com for this. 
In the top right of the slide, I noted: let's not forget about things like data security and confidentiality, as well as bias. And the first two are incredibly important. I use these tools for, say, documentation or data analysis if I know that the content that I'm looking at is not confidential. So if I download a report from the web, I know that I can feed that into Claude, even through poe.com, because it's already on the web. It has probably already been read by that AI tool in the first place. I'm not breaching any agreements with any of my clients and I'm not putting anybody's personal or other data at risk. In day-to-day work, of course, most of you will be using the tools that are provided by your clients or that you bought yourselves, whether it's a translation management system, or perhaps you're using Teams or Slack or Notion or Asana, for yourself or together with your clients or your teams. There are lots of use cases in addition to all of these, and those could be related to subtitling or even AI dubbing. A lot of the media world is proliferated with AI solutions. I will not go into this, primarily in the interest of time. That's one thing. But the other thing is there are just as many of these as there are machine translation tools. And to be very clear, we at Nimdzi do not endorse any of these tools and capabilities. We list them and we advise our clients on which ones are the most operationally viable, or at least create a shortlist of those that are most likely to work for the client. Before we jump into questions, I want to make some conclusive remarks. Closing thoughts, rather. Number one, I think I already mentioned that of course we in the industry are very practiced with AI use. Especially if you are a translator, you must have been using some form of machine translation. If you're an interpreter, you're at least aware of tools such as Kudo or Interprefy that do speech translation. 
You may have even used it in your personal life, or you may of course have used Google Translate to do speech translation for you. Those are not tools that are typically used in very high-stakes environments. So in the high-value translation or interpretation world, AI tools are viewed as a risk. But we as an industry have a lot of experience in identifying how much risk there is in using an AI tool. And we can advise each other and our clients to do the right thing in the first place. Because just because an AI tool can do something, do we really want it to do it? As I mentioned, if you give a man a hammer, he will try to hammer everything as if it were a nail. And you, us, we are the experts. We need to be able to advise our clients how to do this. And again, a quick recap. Every idea is a potential use case, which is to say you can use an AI tool for almost everything to a certain degree, as long as you know what good looks like. As long as you can verify the output, as long as you can correct the output or find the person who can do that, and, very importantly, so that it's worth your time: if you're using a tool that just makes you work more, drop it, find another one, or find another use case. But there's probably an AI for every use case that you can think of by now. And there will be even more. So don't get married to specific platforms, because they will be changing over time, they're evolving, they're developing, and I strongly recommend that you try as many as you can over time. Very importantly, it's incredibly challenging to stay up to date on the development of language technologies. We are a research company, we do this professionally. But especially if you're on your own, it's massively challenging to understand what is happening, what are the new things that you should be trying out, and which are the ones that you should stay away from. 
So sharing experience with each other in different forums is critically important, and not just once a year, but regularly. Because, let's face it, the impact of AI is practically already here. Language service providers are using it, clients are using it, language technology providers are integrating the new wave of AI into their platforms. And then we, as human professionals, we drive this change. We are the ones who ultimately use the outputs, we are the ones who guide the machines, we are the ones who create the training content or the training material and the data. We are the ones who can validate the output. And what you really need to be successful in the new AI world, especially when it comes to the proliferation of platforms and choices, is an insatiable hunger to learn. Oh, and of course, a lot of time to try and experiment, or to learn from others. I have one more closing thought. A typical example of the statement that as a user you don't need to know what's happening under the hood is your mobile phone. I don't need to know what's going on in my mobile phone, what kind of chips there are, what materials are being used, how long the wires are, any of that. I don't need to know how the screen is created, I don't need to know how the microphone specifically sends signals to, well, I don't even know where it sends them. But as a professional, that's not an option. So for us language professionals, knowing how language tools and language AI work, understanding their benefits and their risks, is critically important for two reasons. One of them is because we are using them, and if you're not using them, very likely you will be very soon. And second, we need to be able to advise our customers and our clients if and when to use them, at what risk, and how to correct those risks and errors if and when they occur. Thank you so much for your attention. 
I'm looking forward to any questions and discussions, Eddie. [00:44:28] Speaker A: Thank you so much, Laszlo, for your presentation. I've shared the initial images on LinkedIn, and of course we have a good number of people with us on this presentation. Please feel free to add your questions, as we'll take a few minutes to end the presentation. But before that as well, Mila, if you can project all of the claps. Yes, so there were many claps in there. So thank you, Kevin, Laura, and the names I can't pronounce in good Spanish. Well, yes, thank you. Katerina, Yurik, Khalid, My Med Pharma. All right. Amir Abdullah says this is by far the best presentation I've attended on AI. Thank you so much. Quick question: do you have any specific use cases for AI in localization, particularly in video games? [00:45:39] Speaker B: That is a really good question. So most of the things that I've talked about refer to general localization. As soon as you go into the specifics, a specific segment or a specific client, or even a specific country or language or culture, yes, it becomes much more nuanced, which is again one of the reasons why we need to stay updated, so we know what relates to our challenges or to the client's challenges. Video games are awesome. I love playing video games myself, and some of the things that are happening there are more related to object recognition and correction, e.g. the lip movement of artificially created creatures or characters in a game. But just last week I attended a conference where some of the challenges described by a game developer were, well, that the places where AI could be most useful are probably the most challenging for AI to solve: connecting individually with the gamers, with the players, or, for example, keeping up with the speed at which the language, the lingo and its usage change in these games. 
Especially with teenagers. I'm not a teenager, but there is a great many teenagers who probably constitute the vast majority of the gaming community. Their way of using the tools, the language they use, they change it so fast that you can't really train the AI to do anything with it in the time and scope that is needed. So even though the technology would be amazing to help in those use cases, it's probably the least useful in those. But yes, there are cases where AI can be useful even in video game localization, for example comparing different characters or scripts. Or, if there is a change in the game's script, say for the non-player characters, then you can use an AI to compare the scripts. Or creating a kind of adversary or role-playing AI that would comment back on the game or the script that is being developed, to say where it could be improved, what the linguistic challenges are, or where the internationalization problems are that could come up in the game. Hope that answers the question. There's a lot more to unpack there, and I can't claim to be a game localization expert. Being aware of some of the challenges and problems of AI, and some of the use cases, is useful though. [00:48:15] Speaker A: Yeah, I just find it really fascinating how certain tools really allow you to do proper research and really amplify the effect of our curiosity. So it's really great how some models are able to aid in that kind of even personal quest to get certain information. Gaming definitely can benefit from that. Lastly, there is a question from Serena. She says: how do you deal with confidential information? Are any platforms safe? [00:48:49] Speaker B: No. Well, yes: the platform that you run on your own PC or mobile phone or whatever device you use, of course. 
But the thing is that most of the time, as language professionals, when we deal with tools, we typically use our customer's platform, say a language service provider's platform, or our direct customer's. If you're an interpreter doing direct interpreting, you're using the client's platform. In those cases, the liability for confidentiality is typically not on you as a language professional; it's on the tool provider's side. However, there are of course instances where you may want to take the text saved for translation out and put it into a system for doing machine translation, so you can post-edit it or get a gist of it or whatever it may be. It becomes immediately tricky, because as soon as you're sending the data off to a cloud provider, especially if you're not a paid subscriber, then you've probably just shared the data and the data is theirs. And if there was any confidential information, whether personally identifiable information or, in worse cases, personal health information or somebody's credit card or Social Security number or something similar, that's a serious breach of any privacy statement or privacy regulation. It's up to the individual to make sure that this doesn't happen. For corporations it's also not trivial. Even though many of the cloud providers would say, oh yes, we're SOC 2 certified and ISO certified and have all the certifications, breaches can still happen. Right? And it's up to every individual and company to assess how far they're willing to tolerate the risk of using AI and using cloud providers to run the AI. We also know that there's a lot of regulation on the table, and it's coming: the EU AI Act. The US is also cooking something up. Something may change, of course, with the new administration coming in. There are a lot of challenges that we still don't even know what they are. 
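One practical local safeguard before any text leaves your machine is a redaction pass. The sketch below is deliberately simple and purely illustrative; real PII detection needs dedicated tooling, and these regex patterns only catch the most obvious email, US Social Security number and card-number shapes:

```python
import re

# Deliberately simple, illustrative patterns -- real PII detection needs
# dedicated tooling; these only catch obvious email / US-SSN / card shapes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text):
    """Replace obvious PII with placeholders before text leaves your machine."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

safe = redact("Contact jane.doe@example.com, SSN 123-45-6789, re: invoice.")
# 'safe' no longer contains the email address or the Social Security number
```

Even a crude pass like this changes the risk calculus: what reaches the cloud provider is a placeholder, not the customer's data, and the placeholders make it obvious to a human reviewer where sensitive material was removed.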
We know that they are there, at least, and probably there are a lot of unknown unknowns which will just surface in the next one or two years. To say: how do we really handle this confidentiality thing? Where does the content go? Does it get used for training a model? What happens with copyright issues? And what if I was the original translator and now my translation went into training the model, then why didn't I get compensated for that? A lot of questions that everybody needs to chew through and answer. We'll see how that evolves as well. [00:51:24] Speaker A: Thank you, Laszlo. We have a bunch more questions, but we're about to start a new session. I'm going to entertain a few and talk to our next panelists over the internal chat. But Ekaterina Nikova says: thank you for the presentation, Laszlo. Have any recommendations or guidelines been developed for AI-assisted content creation yet? Is it practiced on a wide scale in any areas? [00:51:53] Speaker B: That's a very good question. And we know of many instances where AI content creation is used, but that's typically English content creation, so it doesn't necessarily enter our multilingual world. Right. Then again, when it comes to creating multilingual content professionally, if the enterprise is just using Jasper or ChatGPT, that doesn't surface on anybody's radar. It only surfaces if it takes away business from a language service provider or a content creator or copywriter. Professional guidelines? No, there aren't really. The challenge, of course, again becomes: how do you train the model to be consistent with the brand, even though every single API call is a different API call? Do you need to train a separate model? Is it enough if you just keep prompting it the same way? Do you need to just fine-tune it a little bit? There are a lot of technical challenges and there are different solutions to them. There are some good practices and not-so-good practices as well. Guidelines? 
I can't say that there are specific guidelines for doing this, but there are plenty of attempts. And again, the language service providers and technology providers are injecting these features into their platforms and products. Certainly there's a future for it. Just as with machine translation, it took us a long time, quite a few years, to figure out how to make post-editing actually work, what the errors are that we need to look for. Now, with LLM translations, we have different errors that we need to catch. It's something similar for content creation and for the copy editors: it's going to be completely different to edit an AI copy than a human copy. And again, we'll see how that evolves in the content space altogether. We know that our challenges in the language industry are not dissimilar from theirs. [00:53:45] Speaker A: Right, Laszlo. And I'm going to do one more question. We will not do your question, Lili Nikolova, because she's asking if you can recommend three MT providers, and you mentioned you don't endorse any; it's very specific, so it depends. [00:53:59] Speaker B: I think there are some household names, though, that everybody kind of knows. [00:54:02] Speaker A: Okay, so let's go to Nestor's. Nestor asks: for improving QA in localization, besides adding in-country reviews to GenAI-empowered translation processes, what other approaches or methodologies would you recommend? [00:54:23] Speaker B: To be honest with you, by now automatic LQA is something that is being attempted. I don't think it's seriously being industrially implemented. It's one of those really challenging things. The main thing that we see is that most of the attempts to integrate AI into, say, translation workflows are mainly replicating the same workflows, just doing them faster or more efficiently. We're still waiting for those breakthroughs where AI will help us to do things differently. 
For example, instead of doing in-country review on the client side, how about an AI simulating in-country review? Can you train a model to be a certain personality, so that the artificial intelligence can understand or replicate a potential response from an in-country reviewer or a user who would otherwise be testing the content? Right. Is this possible? In our understanding, yes, it is. But it's only really meaningful if there's a scale at which it can be put into operations, which is one of the major challenges for AI tools in the first place. In many of the conversations that I had in the recent conference season, people get very excited about AI because it can solve many of these little annoying things that everybody has in their day-to-day, in their workflows, in their processes. But the thing is that none of those little things get attention on their own, because they don't have the business value to automate and to focus on and to create a model for. So you have to rely on somebody else creating a solution that will solve the problem for you, or go ahead and experiment with the large language model or AI tool of your choice. Hopefully the presentation gave some guidance on what could be the top ones or the most relevant ones to start experimenting with, and then the Radar has all the rest of them. That should satisfy your appetite for sure. [00:56:31] Speaker A: Excellent, Laszlo. Thank you so much. I really feel like exploring more of these tools. Even when you are using them, you all of a sudden think those are the only ones, but there are new ones, and they are competing now, just like widgets or plugins on WordPress, something like that. It feels a little bit like that. But thank you so much, Laszlo, for your presentation. Like we said, this presentation is going to be available on YouTube, but also on the LangTalent Podcast on Spotify. 
We will send everyone who has attended this event a newsletter including this information and more information about the future of work. Laszlo, thank you so much for joining us, and I don't know if you have any final thoughts before we go. [00:57:14] Speaker B: I wanted to say just one thing, other than thank you to Eddie and the MultiLingual team for organizing this great event and inviting me. Hopefully see you again next year, I guess in 2025. But I want to say that just as every idea is a potential use case, every individual who has language expertise has the potential and the possibility to contribute to the development of AI in ways that can be useful for more people. Very often what we hear is individuals saying, hey, you know, my provider gives me tools that I don't need. Well, take your thoughts to your provider. Go and tell your language technology or service provider what the tools are that would help you to make it faster, better, cheaper. Talk to your peers, your fellow translators and interpreters at different events, to the production and operations people at your technology provider, but make sure that your voice is heard. Participate in the discussions, bring your notions, bring your ideas, your use cases, your pains and the challenges that need to be solved. [00:58:16] Speaker A: Excellent. Thank you, Laszlo. You'll surely be back in 2025. We appreciate your insights, like everyone who is commenting. Thank you, Laszlo, Ekaterina, Nestor, Nina, Noelia from Madrid, Spain. So thank you so much.
