S05E13: Evolution of Language Services with AI
Vincent is a visionary in the field with extensive experience in the localization and publishing industry, and he covers this topic based on today’s realities of our industry. He also shares what kind of evolution is at play, how AI is disrupting this industry from outside and inside, how it will impact LSPs both large and small, and how the role of linguists will shift into something new.
I tend to view this as, like, this tech is to the language industry what articulated robots were to the automotive industry in the 70s. It completely changed the way the assembly line looked, and I think it's something similar for us, right? The order of operations will change. What we ask humans to do in the process will change and so on, which is why I think that the whole kind of articulated robots are a decent analogy.
Vincent Henderson
Topics Covered
Evolution of Language Services with AI
Intro
Hello and welcome to the Translation Company Talk, a weekly podcast show focusing on translation services and the language industry. The Translation Company Talk covers topics of interest for professionals engaged in the business of translation, localization, transcription, interpreting, and language technology. The Translation Company Talk is sponsored by Hybrid Lynx. Your host is Sultan Ghaznawi with today’s episode.
Sultan Ghaznawi
Hello and welcome to today’s episode of the Translation Company Talk podcast. We bring you a fascinating topic that is very timely, one that I have deliberately avoided speaking about in the past because everyone else is speaking about it and there is too much confusion.
I’m referring to AI in the language industry and there is no better person to speak about this topic than Vincent Henderson. Vincent Henderson is the VP of Language AI Strategy at Lionbridge, the pioneer and leader in translation and localization services. Vincent leads strategy development and execution with a focus on adapting the global content business to take advantage of the historic opportunity presented by the advent of the AI-led economy.
Vincent initially joined Lionbridge to head the product development organization. His teams launched a language AI platform on which they created a suite of language products under the smart content umbrella. With the advent of mature LLMs, this platform now supports advanced LLM capabilities at scale, notably automated post-editing, but also automated terminology and translation memory management.
Prior to joining Lionbridge, Vincent was the head of AI product development at Wolters Kluwer’s Legal and Regulatory Division, one of the top three global professional publishing firms where he pioneered advanced technology for content enrichment and launched multiple knowledge products globally.
Vincent also has held several executive and operational leadership roles in his career, remaining focused on innovation and knowledge automation. Vincent, welcome to the Translation Company Talk podcast.
Vincent Henderson
Thanks for having me. Looking forward to the conversation.
Sultan Ghaznawi
Vincent, give us an introduction about yourself and share a few words about what you do.
Vincent Henderson
Yes. So, I work for Lionbridge. I’m the VP of strategy for our language business. Essentially Lionbridge has three main business units. One of them is AI data curation and AI training, so that’s one business line. There’s a big games activity that we have, Lionbridge Games localization, which is its own division.
And then there’s the language services and language AI division, which includes life sciences, interpretation services, and of course the language business, which is the bulk of the business. So, my job is to lead the development and execution of the strategy for the language AI business, in particular the language services business.
My focus is on how to adapt our business to the historic opportunity of what the latest generation of AI brings to the table and basically the rise of the AI-led economy. So that’s my main focus, how do we leverage those opportunities?
Sultan Ghaznawi
As you said, it’s a pivotal time and localization with AI taking so many different forms and shapeshifting and so forth. Let’s go back to the fundamentals and the time when you started with this industry. When and how did you decide to join this industry? What were the motivating factors behind working in the language services sector?
Vincent Henderson
Yeah, it’s an interesting question. Basically, I’m an early gen-Xer. So, I’m a child of the third industrial revolution, pretty much the software-automation-driven economy. I grew up exposed to Ataris and Commodores and the like in the 80s, Sinclairs and so on, even the Thomson TO7 and so on.
I really hit it off with the IBM PC when I was a young teenager. I was very much exposed to that, and so I developed fairly early, let’s say historically, skills in that field. I very much remember the moment, as a child, when I understood that we could tell machines what to do. That’s something I remember very vividly, and I was utterly fascinated by it. So, I developed an affinity with this.
I’m also kind of an original high school dropout. I started work when I was very young and didn’t really do higher studies until much later in life. I knocked around for a while, and then at some point, when I was still a very young man, it became apparent to me, fairly serendipitously, that I could bring two things together. On one hand there was my background with languages, because I have a very mixed cultural background: my father is from New Zealand, my mother is from Mauritius, I was born in London, and I grew up in Paris.
So, I had this multicultural and multi-language background, and at some point, as I was saying, I realized there was a confluence between my, let’s say, at the time still quite rare skills in knowing how to make computers do what you want them to do, and that language and multicultural side.
At some point it became obvious, the first time I was exposed to an opportunity with what was at the time Lingua Tech. After having spent a few years doing software testing and things like that, I started working at Lingua Tech as head of the multimedia division, a group at the time in France.
I haven’t really looked back since. I did change industries at some point, but that choice I made, to focus on language and computing, was clearly the right fit for me.
Sultan Ghaznawi
Thank you for that introduction. In your experience, Vincent, what are some of the transformative and evolutionary changes that our industry has undergone during your career? I mean, technology is obviously going to be at the forefront, but what else have you noticed, even on the technology side?
Vincent Henderson
Yeah, so as I said, as a gen-xer, I’ve been around since like, let’s say the early 90s, or the mid 90s, in this field and technology is a key part of it. I think that it’s not possible to talk about the evolution of probably any sector, but in particular this one, without mentioning technology.
I kind of remember in the 90s, implementing what must have been maybe one of the very early deployments of a batch automated grammar and spell checker, at the level of the company, where every morning linguists would come in and get their spell check reports and kind of go through their content before they delivered it and so on.
So, very initial automated QA stuff then and I remember also being very interested at the time and understanding how technology changes process. For me that’s a fundamental kind of insight, which is that technology is not just a thing that you know makes things faster or whatever, it’s a thing that in order to implement it, you have to change your process.
If you want to get the benefits of technology, you have to change your process, and we’ll very likely get back to that in the rest of this conversation.
I also have very vivid memories of the whole discussion about the fuzzy match grid with the translation community, right when translation memories came onto the stage, and the question of fuzzy matching and how it impacted translator productivity and so on.
So, there was another case where technology severely impacted some fundamentals of this industry, and it generated business discussions with the translator community and with clients: how do we incorporate this technology into the business model of our industry? All of these early exposures to these types of things really formed the way I thought about technology.
Of course, not to kick open doors, as we say in French, but you know the internet which kind of heralded content, as one of the main drivers of the economy. This was not obvious before, now it seems pretty obvious. A lot of business is about writing content.
It was not obvious in the mid-90s, but this became a thing as the internet came online, as well as therefore enabling the globalization of business in general. Not only globalization of the customer base but also globalization of the localization industry itself, with all of the consolidations.
So, I worked for one of the precursors of Bowne Global Solutions in the 90s, then for Bowne Global Solutions itself, which was then purchased by Lionbridge and so on. So, this whole cycle of investment and consolidation to create fairly large translation or globalization companies is, I think, the main thing.
We’re now in the kind of fourth industrial revolution, with very powerful AIs, especially when it comes to content and language, coming onto the stage. I think the way to look at transformation is through the lens of leverage. When you have a new innovation, some new technology, I think the question to ask yourself is: what leverage does it provide me? What is the thing that I can move, that I can lift, that I couldn’t lift before without that technology?
I think that modern AIs make a big difference. So, for me there’s a fairly continuous path there. I would say, in addition to that, I did work for about 15 years or so in the professional publishing business. I mentioned before we started recording that I started in localization in the 90s, and in the mid-2000s I shifted to professional publishing. I had opportunities, I took them up and so forth, and there I discovered the side of the world that is producing content, not translating content, but producing content.
So, I worked for Wolters Kluwer, which is a very large global publisher. I had similar kinds of roles leading innovation and the implementation of AI and technology in the content creation business, and that’s where I really did a lot of my initial discovery and research on AI applied to content.
From the late 2000s onwards it was classification use cases and very much the search question, which is obviously a big question in the professional publishing world. It’s interesting because there’s in fact a loop now, coming back. The search problem was really one that was brought up in the 90s, further developed in the 2000s, and it pretty much got to a point where it became, not exactly a solved problem, but a by and large solved problem in the 2010s: how do I find information in a corpus of text? Obviously, Google and so on played a huge part in that.
What I find interesting now is that with some of the AI that comes in where we have this great natural language understanding and production capability, if you start thinking about retrieval augmented generation and the like, the retrieval augmented part is itself a search problem, right? Because you have to figure out what is the part of the information that you need to retrieve.
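To make the retrieval half of that concrete, here is a minimal, self-contained sketch of the retrieval-augmented generation pattern: the retrieval step is a classic search problem (here a toy bag-of-words similarity standing in for a real embedding index), and the generation step simply grounds the prompt in whatever was retrieved. All names and the prompt wording are illustrative, not any particular product’s implementation.

```python
# Minimal RAG sketch: retrieval is a search problem, generation grounds the LLM
# in the retrieved passages. The similarity function is a toy stand-in.
from collections import Counter
from math import sqrt
from typing import List

def similarity(a: str, b: str) -> float:
    """Toy cosine similarity over word counts; a real system would use embeddings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    norm = sqrt(sum(v * v for v in ca.values())) * sqrt(sum(v * v for v in cb.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Search step: pick the k passages most relevant to the query."""
    return sorted(corpus, key=lambda p: similarity(query, p), reverse=True)[:k]

def build_prompt(query: str, corpus: List[str]) -> str:
    """Generation step: ground the LLM in the retrieved passages."""
    context = "\n\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# build_prompt(...) would then be sent to whatever LLM API you use.
```

In production the similarity function would be an embedding model plus a vector index, but the shape of the problem, rank passages and then prompt with them, is the same.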
I find it really interesting that you have these two, you know, fairly distinct problem sets that you had originally with search on one hand and kind of language understanding on the other, that are coming together to provide new opportunities now. I’m fortunate, I would say, that I’ve been very deeply exposed to both.
Sultan Ghaznawi
I remember some of these changes in the industry as well as what we were undergoing, for example, with search. Everyone was thinking about it and today, even in my company, RAG or retrieval augmented generation, that’s something that we are very deeply and passionately looking at. For me personally, that’s an area of deep interest.
Let’s get to the core of this discussion. I’ve invited you to speak about AI and how it is bringing a new dimension to the language services industry, in particular for enterprise. I’m interested to hear from you what the current state of affairs looks like in localization and AI.
It seems like people are doing their own thing. There is not really a consensus. It’s just like there was one long time ago when it came to CAT tools, for example. We started adopting them unofficially and they became an official thing. Where do you see things standing today?
Vincent Henderson
Yes, I think you’re setting up the conversation quite accurately. So that’s the bulk, really, of the question, right? So, first of all, our view, my view, obviously, but our view at Lionbridge, is that AI is a massive opportunity for the language industry. That’s one thing, right? It dramatically increases the leverage I was talking about, the leverage that humans have on the production and conception of content, of information and the like. Because this new generation of AI, what we call Gen AI, even though people get really excited about the video and image generation and so on, is in fact about content in general, right? And it’s about language.
Previous generations of AI were about other things, like they were more about things that have to do with metadata and annotation and things like that, right? Things that were really largely overdetermined by the classification problem. Whereas this new generation of Gen AI, as we call it, is quite different. And it’s a core kind of natural language understanding type of technology.
So, there’s another thing which I think is really interesting, as setting the ground for my answer, which is that pre-2020s, basically, NLP, natural language processing technology, was only really any good in English, right? There were exceptions, of course, French and a few other languages. But by and large, it was a very English-focused kind of tech stack.
So if you wanted to do, if you exclude machine translation, which is, I think, a completely separate problem, as soon as you wanted to do something with NLP and, let’s say, AI, in non-English languages, you practically had to reinvent the whole kind of NLP stack for that language that you cared about. And, you know, for your own use case, you would use regexes and indexing and maybe some Naive Bayes stuff and maybe some XGBoost or whatever, if you were really fancy.
It was quite hard to build NLP programs, in particular outside of English. There was a lot of investment for some fairly limited use cases. With this new generation of AI, coming with transformers pre-trained on massive amounts of content, we now have access to real natural language understanding capabilities. That is a big game changer, in particular for the best models, in many languages.
You know, performance will vary. Your mileage may vary, as they say, and so on. But it’s still there, right? And that’s, for me, that’s the big shift between, like, previous AI and new AI, if you want to put it this way. So, now that I’ve set this up, I think that with respect to the state of play in the localization industry, the way I look at it is that I think that the level of maturity in adoption of modern Gen AI in the localization industry, in the LSP industry, is very skewed, right? I think there’s no homogeneity, as you pointed out.
So, the reason for that, I’m going to try and sort of analyze why I think that is, right? So, I think the first reason for that is that, contrary to popular belief, using LLMs is very hard, right? They look like they’re very simple because you can just talk to them, and they’ll say things. Oh, my God, it feels like they’ve understood everything I needed.
The reality is that if you want to scale LLMs as a production capability and you want to do that reliably, that’s very hard. Very few players, I think, have the development sophistication to build the kind of technology that works using this kind of AI. And, you know, we can get into why it’s hard and things like that. But, you know, obviously, we don’t have, like, many hours ahead of us. That’s a whole different conversation, maybe.
So that’s the first thing. It’s hard. Second, you have the fact that this technology will change everything in the language industry. For me, there is no question about that. And most organizations are not good at change, right? Change is a difficult thing because, like all previous automations, and I hinted at that earlier, real adoption of Gen AI in the language and the global content process basically requires that your business process change. I think quite dramatically.
And, you know, I tend to view this as, like, this tech is to the language industry what articulated robots were to the automotive industry in the 70s. It completely changed the way the assembly line looked, and I think it’s something similar for us, right? The order of operations will change. What we ask humans to do in the process will change and so on, which is why I think that the whole kind of articulated robots are a decent analogy.
And basically, given the diversity of vendors on the market, right, even though we often talk about the same three, four, five company names that keep coming up, there are hundreds and hundreds of additional, smaller vendors. It’s a very wide and broad and fairly vibrant community. But I think it’s safe to say, given the challenges posed by AI, that most companies will not succeed, from the largest to the smallest. I think most companies will not succeed in making that operational leap.
Of course, we’re setting ourselves up, I should say, to be one of those that do. As far as the state of play is concerned, what I’m observing makes me think, and it’s something that we’ve spent a lot of time looking at, the diversity of LLMs and Gen AIs, but let’s say just LLMs for now, there’s a very large diversity of LLMs.
The economic models associated with the different LLMs also vary a lot, and they create competing incentives. That’s really interesting when you have that. So, the question is: should I go for limited-capability models that are open source, and therefore basically cheap or free, and fine-tune them for my own use cases, going through each step and use case that I want and fine-tuning all of that, in every language?
The hope there is that I can bridge the capability gap, because by and large open-source models are less capable than the top proprietary models, for the very specific use cases that I’m using them for. Like, I’m going to tune an open-source model to do a very specific post-editing task, for example. Maybe that model is not all that capable in the big scheme of things, but if I fine-tune it to do a very specific type of post-editing in a particular language for a particular type of content, maybe it’ll do a great job at that.
So that’s one option. The other option is, or should I say the other side of the spectrum, because you can obviously do a lot of mixing and matching, is do I go for high capability models, which are very expensive, but that give me a kind of a task and language coverage out of the box that I can leverage so that I can cover more and more use cases, you know, faster? So, it’ll cost me more money, but I will be able to cover more use cases faster and so on.
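As a rough illustration of the first option, fine-tuning an open-source model for one narrow post-editing task typically starts by turning historical (source, MT output, human post-edit) triples into instruction-style training records. A minimal sketch, with hypothetical file names and prompt wording:

```python
# Sketch: build instruction-style training data for fine-tuning an open model
# on one narrow post-editing task (German-to-English technical documentation).
import json

examples = [
    {"source": "Die Lizenz läuft am 31. März ab.",
     "mt": "The license expires on March 31th.",        # raw MT output, with an error
     "post_edit": "The license expires on March 31st."},  # the human correction
    # ... thousands more pairs harvested from past projects
]

with open("postedit_train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        record = {
            "prompt": (
                "Post-edit this machine translation (German to English, "
                "technical documentation). Return only the corrected text.\n"
                f"Source: {ex['source']}\nMT: {ex['mt']}\n"
            ),
            "completion": ex["post_edit"],
        }
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

The point of keeping the task this narrow is exactly the bet Vincent describes: a small model tuned on one well-defined step in one language pair may close the gap with a far larger general-purpose model for that step alone.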
I think that the spectrum of options there is really interesting. There are advantages and drawbacks in both cases; without going into the details, I think they’re quite obvious. But, yeah, based on the intelligence that we have, different vendors have different approaches to that. Some of them are going more towards the open-source side, some of them more towards the proprietary, more expensive and capable models.
These, I think, really represent different strategies to approach this particular revolution. I would say as a fourth item in terms of the state of play and the difficulties that, you know, that are coming onto the stage with respect to AI and localization is the question of speed of change. This is something that people talk about a lot. I don’t want to spend too much time on that, but the pace at which these technologies evolve and arrive, like there’s new models all the time.
There’s always the rumor of the next model that’s coming in two months and things like that. So, the breathtaking pace of innovation in this field since the emergence of it is quite amazing. I actually believe that for the most capable models, like GPT-4, Claude 3 Opus or 3.5 Sonnet and the like, and maybe Llama 3, we’ve reached a plateau of language capability per se.
I think that the capability to better understand content and better produce, you know, content and so on. I think that we’ve reached a plateau. I think that we’ve reached a point probably of diminishing returns just for this. Right. There are other use cases where that may not be the case. But I think the key challenge of these models now is the question of cost and speed and scale to make these models more economically accessible.
Of course, everybody knows that OpenAI introduced GPT-4o recently, and GPT-4o mini even more recently, and the specificity of these models is that they consume far fewer resources, in the case of mini far fewer, and are therefore much cheaper and also faster and so on.
So, I think slashing the cost of tokens and the inference time is really what these companies need to focus on now. To counter this kind of GPT-4o or 4o mini, Anthropic has just announced prompt caching, which, for all intents and purposes, amounts to being able to reuse some of your prompt tokens essentially for free.
So that increases speed and decreases costs, which I think is a play for them to counter GPT-4o and mini in terms of economic feasibility. As well, some of the open-source models, like Llama 3, and very likely Mistral at some point, are starting to catch up and to be very, very promising.
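As an illustration of the prompt-caching idea, the sketch below marks a large, rarely changing prompt prefix (style guide plus glossary) as cacheable so that repeated per-segment calls do not pay full price for those tokens again. It is written against the Anthropic Python SDK as of mid-2024; parameter details are simplified, the file names are hypothetical, and the feature may have evolved since.

```python
# Sketch: reuse a large static prompt prefix across many per-segment calls
# via prompt caching. Older SDK versions required a beta header for this.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

STYLE_GUIDE = open("style_guide_de.txt").read()   # large, rarely-changing context
GLOSSARY = open("glossary_de.txt").read()

def post_edit(segment: str, draft: str) -> str:
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        system=[
            {
                "type": "text",
                "text": STYLE_GUIDE + "\n\n" + GLOSSARY,
                # Mark the static prefix as cacheable so repeated calls
                # do not pay full price for these tokens every time.
                "cache_control": {"type": "ephemeral"},
            }
        ],
        messages=[{
            "role": "user",
            "content": f"Source segment:\n{segment}\n\nDraft translation:\n{draft}\n\n"
                       "Post-edit the draft so it follows the style guide and glossary. "
                       "Return only the corrected translation.",
        }],
    )
    return response.content[0].text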
So, it keeps us on our toes. These are the changes we have to keep tracking, and you always have to make decisions about something or other: do we try this model? Do we adopt it? Do we test it, based on something we already have that already works, and so on? So that’s really interesting. But that’s where the disruption will come from.
I want to finish on one item, which I think is really critical, which is the question of trust, what we call trust. It’s kind of related to the question of speed of change. One of the things that I find always important to keep in my mind is that LLMs are machines that make decisions about what they output, right? They’re not like NMTs. NMTs are one-trick ponies, right? They do something relatively well, but they basically segment in, segment out, right? That’s what an NMT does. It’s a one-trick pony, which is optimized to the hilt to try and kind of get that segment out kind of right, given that segment in.
The thing about NMTs, right, is that when you get a new model, a new version, an improved version of some existing NMT model, you can feel reasonably confident, you might still want to test it and so on, but you can feel reasonably confident that it’s basically going to give you a better BLEU score or a better COMET score or something, because that’s what it’s optimized on, right? You know that a new version of a model is optimized on these metrics, and that’s it, right? And these are metrics that are important to your post-edit workflow.
LLMs are not like that. LLMs are not optimized for one specific output; on the contrary, the whole point of LLMs is that they’re optimized for generalized thinking and processing and things like that.
So that means that when a new version of a model comes out, which may be touted as better, smarter, blah, blah, blah, and faster and whatever, you can’t just assume that whatever you have working today with existing LLMs that you’re using in your production is going to work for this new model.
This new model might have been optimized differently to reach a better generalizable capability, which means that maybe the prompts and the sequence of prompts and the different pipelines that you’ve implemented, maybe they’re not going to respond so well with this new model, which is going to expect maybe a different way of presenting the problem or something, right?
So, I think that this question of how we trust new things that are coming out is not just a question of adopting the new, better thing. Even adopting the new, better thing requires you to do almost the same amount of work, or at least a lot of the validation work that you had to do for every previous step as well.
I think that puts a lot of strain on LSP organizations. You need to have a very strong backbone to be able to weather that and do the testing, and very smart, strong leadership and creative, reactive, smart teams of engineers, language specialists, and so on, who work very hard to keep up with this, make the right decisions, arbitrate between the different options and things like that.
You know, what I’ve just described is a fairly, challenging environment and that’s why I think that a lot of companies will be challenged by that. And I would say I feel very lucky to be at Lionbridge because I feel like we have that, right? We have strong leadership, smart, creative people, and the commitment to make this work and to participate in this localization revolution.
Sponsor
This podcast is made possible with sponsorship from Hybrid Lynx, a human in the loop provider of translation and data collection services for healthcare, education, legal, and government sectors. Visit HybridLynx.com to learn more.
Sultan Ghaznawi
That’s certainly how you described it. The complexities make it exciting. And having the right people and the right attitude is what will get you through this. I mean, sometimes I look at this and I compare our industry to back in the early 1900s when, you know, combustion engines were becoming more popular and mass market.
Can you imagine Henry Ford going to mechanics that were repairing horse carriages and asking them, can you now repair these cars that we’re building? The reaction must have been, you know, no, we don’t want to do that. Are you crazy? For hundreds of years, we’ve been repairing horse carriages.
So, I think we are at that pivotal moment in our industry as well as language service providers. How do we do this? Do we continue providing the old school traditional way of translation services? Or do we want to work with technology? Again, the language industry is not new to AI, Vincent, as you just mentioned. In fact, the very first AI was created to perform translation almost a century ago.
Soon it will be, I guess. And machine translation has been around for a long time. In the past two decades, we’ve seen incredible advances in this area, as you pointed out. In your opinion, what are we offering new today to our clients to help them be more efficient and improve their time to market for their products through our AI capabilities?
Vincent Henderson
Yeah. Indeed, that’s true that language has always been, like, the core use case of AI, really, when you think about it. You know, Eliza, I think, if I’m not mixing up my history, but, you know, one of the first computer programs that was created to interact with people was also about talking, right?
It’s about having conversations and so forth and the like. So, language is at the heart of, or I should say is almost like the grail of, computing, right? Getting computers to understand language has been what the whole computing adventure has been about, basically, since the beginning.
So, 100% agreed with that. And so, therefore, we’re not new to AI either as an industry. Having said that, I think that there is a very significant difference this time, right? I’m typically very wary of people who say “this time it’s different,” but in this case, I’ll just join that crowd.
The thing is that LLMs and new-gen AIs are not the new generation of machine translation. That’s a big misconception that people have. They think, oh, well, it’s kind of the new generation of, like, this kind of language stuff, so, therefore, it’s like the new NMT. It’s not. I don’t think it is.
We have a lot of data nowadays, and I’m sure other people do, but we certainly do internally, that shows us that if you take an LLM and you give it a segment, like the way we do localization, because that’s another thing about context and localization, but you take a segment. That’s how an NMT works, right? An NMT works by receiving a segment, translating it into some kind of target in some language.
If you do that with an LLM, you say, here’s a segment, translate it to language X, and it’s going to output it. LLMs don’t do a better job than NMT. A good NMT does just as good a job as an LLM does if that’s the task you give it to do. That’s what I was saying about NMTs being one-trick ponies, right? There’s only one thing that NMTs know how to do. Companies like DeepL and so on have tried to build other features around it, and of course it’s not that simple, but still, nonetheless, it’s mostly about segment in, segment out.
An LLM doesn’t do better if you do actual evaluations the same way that you evaluate an NMT, by just asking the LLM, here’s a segment, translate it. You get pretty much the same result. In fact, an NMT that is highly trained on some specific content types and for a specific customer will do a better job than your LLM there.
So LLMs are not the new generation of NMT. What LLMs are, they’re the new generation of post-editors, reviewers, QA experts, engineers. That’s what LLMs are, right? And I think that’s the key mindset if you want to understand what this technology is changing to our industry. It’s not something that’s there to replace NMT. It’s something that’s there to replace everything else after NMT.
Let me expand a little bit on that. This kind of notion of context and kind of one-trick ponies and what LLMs really are. Basically, for an LLM to be useful in the localization workflow, what it needs is the right context for what you’re asking it to process, to review, or even to translate, maybe in some cases for sure. You need context and you need instructions.
Context and instructions are precisely what you cannot give an NMT; the equivalent, the proxy for context and instructions for an NMT, is basically training the NMT. That’s how you do it, right? An LLM, on the other hand, can receive all kinds of arbitrary inputs, provided you do it smartly, to adapt the work it’s going to do to this context and these instructions and the like.
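To make that difference concrete, here is a minimal sketch of a prompt builder for a review step that bundles context (glossary, style notes, neighbouring segments) with explicit instructions, exactly the kind of arbitrary input an NMT cannot take. The function and field names are hypothetical.

```python
# Sketch: unlike an NMT, an LLM can take arbitrary context and instructions
# alongside the segment. A hypothetical prompt builder for a review step:
def build_review_prompt(segment_src: str, segment_tgt: str,
                        glossary: dict, style_notes: str,
                        surrounding: list[str]) -> str:
    terms = "\n".join(f"- {src} => {tgt}" for src, tgt in glossary.items())
    context = "\n".join(surrounding)
    return (
        "You are reviewing a translated segment for a software UI.\n"
        f"Style notes: {style_notes}\n"
        f"Preferred terminology:\n{terms}\n"
        f"Neighbouring segments (for context):\n{context}\n\n"
        f"Source: {segment_src}\nTranslation: {segment_tgt}\n\n"
        "If the translation is accurate, fluent, and consistent with the "
        "terminology and context, return it unchanged; otherwise return a "
        "corrected version. Return only the translation."
    )
```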
That’s precisely what humans bring to the table, up until today. Basically, the localization industry is based on the following process: I have pre-translation using NMT, which is then reviewed by a human linguist, who we assume has context. They’ve read the style guide, they understand the glossary and all this kind of stuff, and we expect that the human is going to review the output, understand all of that, and make the right decisions, right?
Another thing that a human has, which I think is often the most underestimated contribution of linguists in the localization process, is common sense, right? Which is obviously a thing that NMTs don’t have. Humans with common sense can tell, almost tautologically, what makes sense. They can tell whether a glossary entry that looks like it might apply here actually does or not, because we know that glossaries are not necessarily deterministic instructions.
They can feel the style guide, they can feel the tone of the content, they can infer the intent of the content because they understand what the product is about, and things like that. There’s a whole bunch of areas like that where human linguists apply common sense, based on heuristics they have in their head, to arbitrate ambiguous semantics and make these kinds of decisions.
They, of course, don’t always do it perfectly. We all know humans are fallible and so on. But they do that a lot, and it’s a big part of what they bring to the table today on the translation question, something that a lot of people who don’t understand this space don’t appreciate. You see what I mean?
People who are not well versed in these problems typically don’t get that, right? They don’t really understand what piece of value is added by humans in the value chain of translation. So, all of this was, up until very recently, the irreducible contribution of the human in the loop in translation-related problems.
For me, this is what’s changing: this injection of understanding of the context, drawing on existing information to make the right decision, to interpret an ambiguity and arbitrate it, and make a decision on it going forward. This is what LLMs can do, right?
Now, LLMs are fairly finicky technology. Everybody knows about hallucinations and things like that. So, to get them to do that and to apply this kind of common sense, so to speak, and understanding of the context and instructions and the like and expectations, getting them to really take that into account to then process the text going forward. That’s the hard part.
So, the role of the human, and this is where I’m getting to the point of your question, I’m sorry for the very long introduction to it, but what this introduces into the market is, on one hand, a new place for humans in the workflow, and on the other, a new way to automate the questions of: does this text make sense, is it an accurate representation of the intent of the original text, does it fit within the context of what it’s doing, and so on. Does it make sense, ultimately, is what is brought to the table.
So, then this is where the question of what does it bring to our customers? What it brings to our customers, I think, is fairly straightforward. It’s faster and cheaper localization for a better outcome.
Another thing that it brings to the table, and I’ve just mentioned the word outcome: doing this is hard. I mentioned this before, we could have a whole other podcast about the nitty-gritty of implementing LLMs and Gen AI in localization, all of the details, all those steps and so on. Fascinating topic.
What it brings to the customers is basically cheaper and faster localization, provided that they understand that they need to do it with the kind of expert teams I was talking about before: smart, reactive, creative engineers, language specialists, language engineers, leaders and so on, who will build the technology to do that. Because the apparent ease of use of LLMs, after all, they understand what we say, is deceptive.
I think that this is something that we need to understand, clarify and make sure that we are very clear about the value that we are bringing to the table in making LLMs a thing in the translation industry. So having said that, I’m just getting back to some thoughts that I had a minute ago which is the outcomes question.
I’m sorry, I’ve been through so much to answer this question, because I think that’s the main point. I typically tend to finish with my main point, which I think is something I should learn not to do so much.
The main thing is this. As a person who’s worked 15 years in the professional publishing industry, that’s definitely an insight I feel I bring with particular legitimacy and weight to the localization business: the only thing that matters in content, for the people who produce content, is what the content does.
What does it actually accomplish in the real world for my business and so, one way of putting it that people use is, content has a job to do. The only thing that matters is whether that content is doing its job.
What matters is not whether a QA linguist thinks that the flow of the sentence might be better if it was worded differently. What matters is maybe not even whether the glossary was properly applied here because this glossary is kind of like a nice vetted linguistic asset and things like that.
The only thing that matters is whether the content does its job. So, what LLMs really make possible is that they open something that has been a very aspired to capability in the localization industry, which has sometimes been referred to as transcreation and things like that but it’s basically matching the target content to the job that it’s supposed to do.
This is not done very extensively, because it’s a very expensive endeavor, but mostly it’s not really done that much because, even when it is done, transcreation, modifying the target to better fit the target market and so on, there is no empirical data that tells you whether what you did was in fact useful at all. It may be pleasing to some people who review content, but did it really make a difference to the business?
So, because you can’t measure that then you don’t do it. Obviously, it’s been a while now since businesses have data on the performance of their content, whether it’s engagement, click-through rates, funnel values and all the like. They have all of this data about how their content performs.
Now that we have the cost-effective ability to build services that ingest this performance data, we can build an AI solution, using LLMs and these smart techniques, to modify the content to better reach the goals it has. Then you can have automated A/B testing, for example, where you generate two versions of the page, run the test, and figure out which one works best.
That will also be something that can help you address search term drift in different markets and so on. Once you see it, you ask: hey, why is my page in Brazil performing so much better than that same page in Germany? It may be a multi-factorial thing that has to do with the markets and things that are not related to the content itself, but maybe, very likely, it’s also something that has to do with the way it was translated.
Maybe my German translation hasn’t been touched in two years and I’ve never noticed that it no longer uses the right terms, because the terms that people use to search for information have drifted. There are new, idiosyncratic ways of saying things that I’m not taking into account in my content, for example. Whereas in Brazil, for some reason, it is really up to date and it does well, or maybe in Brazil my page is using some kind of engaging vernacular that gets people to click on the button, whereas in Germany I’m using something extremely formal, which people find boring today.
So LLMs allow our customers to experiment, now that we have the data to experiment with, on their target translations, get the data, and then feed that knowledge back into the customer’s AIs that are used in their production workflows, which would increase and improve their localization.
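A minimal sketch of what the automated A/B loop could look like once that performance data is flowing: variant B is the LLM-rewritten page, and a standard two-proportion z-test decides whether its click-through rate is genuinely better. The numbers, and the rewrite itself, are purely illustrative.

```python
# Sketch: A/B test an original page (A) against an LLM-rewritten variant (B)
# using click-through data; compare the rates with a two-proportion z-test.
from math import sqrt

def z_test(clicks_a: int, views_a: int, clicks_b: int, views_b: int) -> float:
    """Return the z-score for the difference in click-through rates (B minus A)."""
    p_a, p_b = clicks_a / views_a, clicks_b / views_b
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    return (p_b - p_a) / se

# Example: variant B was rewritten to use current search terms and a livelier tone.
z = z_test(clicks_a=180, views_a=12_000, clicks_b=231, views_b=12_000)
if z > 1.96:   # roughly 95% confidence that B outperforms A
    print("Adopt variant B and feed its rewrite instructions back into the pipeline")
```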
So, in conclusion, not only are these technologies helping us reduce the cost of translation, because it’s going to be faster and cheaper and because we can make it reliable and overcome all the difficulties of scaling LLMs to the scale of millions of words. That’s one thing.
The second thing is that they’re going to allow our customers to tailor their content better and improve the performance of their content in their target market, which is the only thing that really matters.
Sultan Ghaznawi
The point that you mentioned is that humans brought a degree of context and retained that context. I mean, you probably remember how you felt when your child was born, even decades ago. LLMs are trying to accomplish that, and it’s the intuition of the human that makes it irreplaceable. Even today, LLMs don’t have that.
So that’s where our linguists bring value. There is a certain degree of uncertainty among people in the language sector today. They think about obsolescence, they think about where things are going in the future. They don’t even know what to expect next. As you mentioned, the rate of change is so fast that there’s no certainty or predictability available to anybody today. I’ve heard that LSPs are seeing volume reductions and so on when it comes to the work that they did before. What is your take on the impact of AI on this sector, in practical terms?
Vincent Henderson
Yes. So, with what LLMs bring to the table, for sure, absolutely, there will be fewer words reviewed by humans. I think that’s probably uncontroversial. The question is whether it’s a smaller proportion of words or fewer words in total. You know, the advent of technology has created a curve in content where, even though more and more content was handled by NMTs and all kinds of automation technologies, that efficiency in producing content meant the amount of content out there to review, translate, post-edit, edit, whatever, was always increasing.
I think that we’re reaching a point where, well, no curve that goes up exponentially keeps going up exponentially forever. That’s a fact of the world. At some point you reach some kind of plateau.
It will be reached, whether it’s already the case or not. If you think about the dead internet theory and things like that, there is a point at which the question occurs: is my problem that I don’t have enough content out there, or is my problem that maybe I don’t have the right content, or that it doesn’t perform well enough, and things like that.
I believe that, yes, the number of words reviewed by humans will for sure start reducing at some point, or plateau for sure and very likely reduce, irrespective of whether there is more content in total or not, as the LLMs, and not so much the LLMs themselves but the LLM pipelines and the whole tech stack around LLMs that we’re building, become more reliable and predictable and so on.
The main change is that humans will monitor the system; that’s what’s going to happen. So, I think that the impact of AI is mostly going to be that the new role of humans is going to be critical. It’s going to be about describing the task at hand properly so that an LLM can actually do it right, and you’re going to have to break the task down into a lot of little tasks and things like that.
There’s a lot of architecture that needs to be done to get this right. We have a lot of data around that. The humans will have to understand the errors that the LLM makes and develop intuitions about why they’re making these errors and how to prevent them. Should I add an extra step, should I do some preparation work, should I do some post-processing to catch these mistakes and then fix them? All of this kind of stuff, this is where humans will be the crucial contributor.
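One small example of the “break it down and catch the errors” discipline described here: a deterministic post-check that verifies required glossary terms survived the LLM step and feeds the specific failure back as a sharper instruction. `call_llm` is a placeholder for whatever wraps the model API; the structure, not the wording, is the point.

```python
# Sketch: one narrow step plus a post-processing check and a targeted repair prompt.
from typing import Callable, Dict, Tuple

def postedit_with_term_check(prompt: str, required_terms: Dict[str, str],
                             call_llm: Callable[[str], str],
                             max_attempts: int = 3) -> Tuple[str, bool]:
    current_prompt = prompt
    output = ""
    for _ in range(max_attempts):
        output = call_llm(current_prompt)
        missing = [t for t in required_terms.values() if t not in output]
        if not missing:
            return output, True                 # passed the deterministic check
        # Repair step: feed the specific failure back as a sharper instruction.
        current_prompt = (prompt + "\n\nYour previous answer omitted these "
                          f"required terms: {', '.join(missing)}. "
                          "Revise it to include them.")
    return output, False                        # still failing: route to a human
```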
So, as far as the impact on the sector, that is a big part of what it’s going to be. Now, with respect to business models, because I think ultimately that’s maybe the question you’re asking, with the volume reductions and so on that you mentioned: I think this will prompt a change in the business models. Not because volumes are going down; volumes themselves are not going down. As I said, it’s mostly the fact that fewer and fewer words are paid to be reviewed.
The business models will change because the main driver, and that’s, I think, a critical insight, the main driver of the cost of translation, of the word price for whatever language and so on, is how much time an actual human being has to spend looking at a segment to make sure that it’s okay. That’s the main chunk of that price.
As prices go down, they will go down mostly because the amount of human effort required is itself going down. So that means that at some point we’re going to reach a situation where the business model is no longer driven by the number of minutes of human attention that has to be applied to a word.
Instead, the business model will be driven by the cost of maintaining the system, by keeping up with the AIs, by how much a token costs and things like that. How good am I at minimizing the number of tokens I consume while maximizing the outcomes of my translation, and how well is my content performing, improving my German page performance, in the example I took earlier?
This is what’s going to be driving the thing. So, as far as business models go, I think we are probably going, as an industry, ultimately, maybe with the exception of certain use cases, I don’t pretend to know everything about the future for sure, but I think one of the directions we are going is towards more subscription-model-type things.
It’s about paying a subscription to a highly sophisticated technology stack that is maintained by a vendor and highly curated by humans who know what they’re doing. But the curation of the stack by humans is not reviewing every segment that comes out. The curation is sampling and making sure it works, tweaking it here because we realize it’s drifting in one direction or making mistakes in one case, and so on.
So that is a more predictable type of cost, which is almost not driven at all by the number of words going through it. It is at large scale and in orders of magnitude, but it doesn’t make a difference whether you have 10 words or 20 words going through it in your job. Ultimately the cost is going to be the same.
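A minimal sketch of the sampling-based curation described above: route a small random share of completed jobs to human linguists and raise a flag when the measured error rate drifts past an agreed threshold. The sampling rate and threshold are illustrative, not prescriptions.

```python
# Sketch: spot-check a sample of AI output instead of reviewing every segment,
# and alert when the observed error rate suggests the stack is drifting.
import random

def sample_for_review(jobs: list, rate: float = 0.05, seed: int = 42) -> list:
    """Send roughly 5% of completed jobs to human linguists for spot-checking."""
    rng = random.Random(seed)
    return [job for job in jobs if rng.random() < rate]

def check_drift(reviewed: list, error_threshold: float = 0.03) -> bool:
    """reviewed is a list of (job, has_error) pairs from the human spot-check."""
    if not reviewed:
        return False
    error_rate = sum(1 for _, has_error in reviewed if has_error) / len(reviewed)
    return error_rate > error_threshold   # True => investigate and retune the stack
```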
So, I think that this is how this changes the business model. So, it’s very much a revolution that is going on in the industry. I think that ultimately again, timing, your mileage may vary in how fast you think that’s going to happen, but I even think that the core localization stack is going to disappear.
I think the whole stack of TM-based CAT tools will disappear. The logic will be much more akin to the way AI is implemented in content systems and so on, where it’s about generating something with AI and then having sampling and statistics about how close what your AI is doing is to what you want it to be, as opposed to reviewing every single thing.
So, like I said, new business models. Possibly around subscription because it’s about maintaining a stack as opposed to counting words and the fact that the number of words that will be reviewed by humans will decrease and so on.
What I find interesting about that, if we’re talking about linguists, for example, who are obviously a major stakeholder community in this industry: when they hear this, they might feel that what I’m saying is not right, or that it shouldn’t be that way, or that I’m somehow dismissing the value they bring to the table.
On the contrary, I think it’s the other way around: the value that we expect from linguists and people who have these kinds of skills is in fact now much more important than before, because it’s not about yet another word in a stream of a million words that I’m reviewing. No, I’m actually reviewing the thing that is going to be used to make sure that the system works better.
Therefore, the level of expertise that I need to apply to this particular review is much more critical than before. And because LLMs are pretty much by design natural-language driven, meaning that the way you make them better, I don’t want to caricature, but it’s basically by talking to them better, by being clearer in your instructions and maybe splitting tasks into different things and so on.
But because most of the instructions are natural language, that’s a kind of a role that I feel like linguists have a very natural path to slide into if they are so inclined.
Sultan Ghaznawi
I was going to ask you more about whether the industry perceives LLMs as competition, but you’re right, you’ve answered that: it’s not a competition. It’s basically more of an evolution, if you will. We need to accept that and adapt to it, which leads me to a follow-up question related to that.
What type of retooling and rethinking and repurposing should we be accepting or adopting in order to not just survive, but adapt to this new changing landscape?
Vincent Henderson
Yes. I think the retooling has to do with, well, one of the concepts that we are developing, or let’s say aggregating around, at Lionbridge is the concept of total cost of ownership of content, global content TCO. This is related to something I was talking about earlier, which is how we optimize that system in terms of how it enables the global content to perform the job it’s designed to perform, how we make it such that if I’m noticing a dip in performance on my German page, I can do something about it.
That’s really critical. I can actually do something about it: I can go back and change the whole terminology of that page, or the whole website, if, for example, I’m noticing search term drift or whatever. So, I think a lot of the retooling will come from there. And I think that also means there is a retooling component on the customer side as well.
Because one thing is for sure, and we’re seeing misconceptions like this in some of our customers today: because LLMs are so alluring, in the sense of, hey, they seem to understand everything I’m saying, we see a kind of expectation that it’s magic, right? That with this new AI stuff, my God, all you have to do is tell it what your problem is and it’s just going to fix it for you.
It’s not like that. Like I mentioned multiple times during this conversation, scaling LLMs at production scale for content processing, in particular, is very hard. Because you need reliability. You need predictability of the system. You need to make sure that whenever you ask the LLM to perform this particular task, what comes out is a thing that fits into your next automation step. And it’s actually good. And you can rely on it. And you have the data to back it up and all that kind of stuff.
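As one example of what that reliability discipline looks like in practice, a step’s output can be validated against a contract before anything downstream consumes it; a failure triggers a retry or human escalation rather than silently corrupting the pipeline. The expected JSON shape here is hypothetical.

```python
# Sketch: validate that a batch post-editing step returned well-formed JSON
# covering every segment before the next automation step consumes it.
import json

def validate_step_output(raw_llm_output: str, expected_ids: list) -> dict:
    """Raise (and thereby trigger a retry or human escalation) on any contract breach."""
    data = json.loads(raw_llm_output)                  # must be parseable JSON
    if not isinstance(data, dict) or "segments" not in data:
        raise ValueError("missing 'segments' field")
    returned_ids = [seg["id"] for seg in data["segments"]]
    if returned_ids != expected_ids:
        raise ValueError("segment IDs dropped, duplicated, or reordered")
    if any(not seg.get("target", "").strip() for seg in data["segments"]):
        raise ValueError("empty target text in at least one segment")
    return data                                        # safe to hand to the next step
```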
You have to do this dozens of times in a row, right? That’s what scaling LLMs means. So, the random companies out there who think that LLMs give them the opportunity to just put a developer on it, hey, build me an LLM thing that does my translation, I think that’s a misguided way of seeing how these things work.
That’s not the retooling they need to do, let’s put it this way, to answer your question. The retooling that would, on the other hand, be very interesting and very useful, and which I think is fairly nimble, is to process the content performance data, whatever you use to measure the performance of your content, Adobe Analytics, Google Analytics or whatever, and to extract from it a subset of data that is geared to inform the editorial process and therefore the global content process.
Again, if I’m noticing my German page’s performance drifting, this needs to be noticed, and it needs to come with a bunch of data. I need to know: is it a reach problem? Is it a CTR problem? Is it an engagement problem? Is it that people are not reading all the way down? Is it that they’re not clicking on the links? All of this kind of stuff, which tells you what the problems are.
Because we have the content of that page and we have the context of that page, we are able to start making hypotheses, fix them, republish them, maybe A/B test them to see whether the performance really is increasing, and then learn from that, from the point of view of whether it improved the outcomes, and build that back into the customer AIs.
All of this that I’ve just described is something that can be largely automated, right? This is not something where you have somebody getting the page, looking at it, rewriting it. I’m talking about giving instructions, using LLMs, doing it smartly, to say, hey, rewrite this page so that this terminology is used instead, or make the style more like this or like that.
Then you can publish that version, do your A/B testing and so on. And if it did improve the performance, then you can take those instructions and build them into the customer AIs that have been doing all of your post-editing and QA and so on in your localization pipeline. So that requires a lot of retooling.
We have also been involved, for the last year or so, and it’s an ongoing process, in retooling basically the whole localization pipeline. So, we have, under the brand of Aurora, a new kind of AI automation brand that we’re starting to use to identify this whole retooling that we’re doing.
Aurora is essentially an orchestration platform, which is connected to what we call our language AI platform. That’s a very sophisticated AI platform that exposes a lot of AI services, and when I say a lot, it’s several dozen different types of AI services that can be called and orchestrated by this Aurora platform to perform those different tasks.
Those tasks may be: translate this, post-edit this, do QA on this, review this from this perspective, apply this style guide to this, is this done properly? All of these steps use AI, and each of the steps I just mentioned is itself broken down into multiple tasks, multiple LLM tasks.
So the idea here is to retool your workflows so that you have this kind of orchestration, and so that you have in your AI platform a customization layer, where, even though you have a standard workflow using AI to do post-editing and QA and things like that, the AI that you’re calling to do that actually knows about your style guide, knows about your content, knows about your product and so on.
This is where the context thing comes in, right? This whole retooling is about managing context throughout the language processing workflow and making sure that the context is cycled through the process, and that later steps are aware of previous things that have been done, and things like that.
That’s what retooling is. This is a fairly complicated problem because of all kinds of limitations of LLMs. Size limitations, for example: even with LLMs that nominally take 120,000 tokens of input, or some of them now a million or whatever, we know from testing that, first of all, most of them have a very limited output, right?
So, one of the constraints that we have in localization is that we need an output of a comparable size to our input. Well, our input will be a bit larger, with all kinds of instructions, in fact much larger, several tens of times larger, but it’s still not like four thousand tokens against one million, right?
What we do know is that if we give let’s say 120,000 words tokens worth of instructions for a few segments, most of the instructions will be lost. Despite what anybody says despite all of the stuff about now I can reason, you know there’s all kinds of benchmarks online for things around kind of reasoning and stuff like that.
The fact is that doing language interventions on specific segments is a delicate thing. Language is delicate, and it’s not easy to provide really detailed and unambiguous instructions about what needs to be done to language specifically. So, it’s not possible to do that in a very precise way, and it’s not possible to scale it by just stuffing the context.
You have to scale it by focusing the LLM on very specific tasks and instructions that you apply in a chain. So, there you go, those are the two retooling things that need to happen. On the one hand, getting data about content performance so that if your German page performs worse, you can do something about it and feed that back into your localization process.
Secondly, retooling means orchestrating sequences of tasks, breaking the localization work down into a set of well-defined actions, and building tooling that can manage that.
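As a minimal sketch of what "focusing the LLM on very specific tasks applied in a chain" can look like, assuming only a generic llm callable: each rule is applied as its own small, verifiable pass instead of stuffing everything into one prompt.

```python
# Sketch: apply instructions as a chain of small, focused passes rather than one
# giant context-stuffed prompt. `llm` is a generic text-in, text-out callable.
from typing import Callable

def apply_instructions_in_chain(segments: list[str],
                                instructions: list[str],
                                llm: Callable[[str], str]) -> list[str]:
    """Apply each instruction to each segment as its own narrowly scoped LLM call."""
    revised = list(segments)
    for instruction in instructions:  # e.g. one terminology or style rule at a time
        for i, segment in enumerate(revised):
            revised[i] = llm(
                "Apply exactly this instruction to the text and change nothing else.\n"
                f"Instruction: {instruction}\n"
                f"Text: {segment}"
            )
    return revised
```

The trade-off is more calls per segment in exchange for instructions that are short enough to be followed and checked reliably.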
I would say that’s a big part of it. The third one in the retooling stack is what I call the trust framework. I mentioned trust earlier, but I didn’t get into it in detail. We have this framework we call TRUST, which stands for transparent, reliable, useful, safe, timely.
TRUST is a way of approaching the deployment of AI in operations that aims to ensure that the AI is indeed transparent. By transparent, I mean we know what it does: it does very well-defined, specific things, so it’s not like we’re hiding what it does. That doesn’t mean, of course, that the AI itself is fully explainable. We don’t know exactly why it chose this word there instead of that word; the transparency of the AI models themselves is a whole question in itself.
That’s not what we’re talking about. The transparency we mean is transparency about what we’re asking the AI to do and whether it’s doing it properly, empirically.
Reliability means, of course, that it needs to do what we tell it, every time we tell it, and we need data to back that up. We need to make sure it does what it’s expected to do and not something else.
Usefulness means that we don’t just apply AI because it’s cool, or because it looks nice, or because we can put it in our brochure. We apply AI in a purposeful manner, on use cases that add value to something or other: making it possible for us to reduce the localization price, to make the work better and more reliable, and to make the content do a better job. So that’s useful.
Safe, the S in TRUST. Safety means, of course, that you need to control what the AI is doing and make sure the AIs you are using are not increasing the risk profile of human activities. That involves checking, sampling, and pre-testing and validating your AIs before you put them into production.
All of this comes down to finding ways to mitigate issues ahead of time.
Of course, timely, well, that’s pretty obvious: applying AI means we should be able to do things faster, as opposed to things taking longer. It’s a bit obvious, but it’s good to remember that this is part of why we do it. That whole framework of checking, verifying, QA’ing, measuring and so on is also a whole new toolkit that needs to be put into the stack and into the workflow.
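One purely illustrative way to operationalize the "reliable" and "safe" parts is a sampling gate that an AI step must pass before going into production. The check function, sample size, and 95% threshold below are invented for the example, not actual Lionbridge criteria.

```python
# Illustrative pre-production gate: sample outputs, run automated checks, and only
# promote the AI step if the measured pass rate clears a threshold. The check
# function, sample size and 95% bar are invented for this example.
import random
from typing import Callable

def validation_gate(candidate_outputs: list[str],
                    passes_check: Callable[[str], bool],
                    sample_size: int = 200,
                    required_pass_rate: float = 0.95) -> bool:
    """Return True only if a random sample of outputs meets the quality bar."""
    if not candidate_outputs:
        return False
    sample = random.sample(candidate_outputs, k=min(sample_size, len(candidate_outputs)))
    passed = sum(1 for output in sample if passes_check(output))
    return passed / len(sample) >= required_pass_rate  # gate deployment on measured data
```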
Sultan Ghaznawi
I was going to ask you more about how AI itself has been adopted in our industry. There are biases, for obvious reasons, but I think we are past that point already. In the past, we were limited to text translation. As you know, this industry is known as the translation industry, followed by interpreting and a bunch of other language-related use cases.
Communication was basically what we delivered, primarily as a channel in text format only. Today we have audio, video and so many different types of media. In closing, as we approach the end of our conversation, I would like to ask how AI will transform the language industry given the prevalence of non-text media and other modalities.
Vincent Henderson
That’s indeed the next big thing. I often say that for multimedia content, we are right now more or less where we were for text content about a year ago. So there’s a kind of gap. By that I mean the technology does some pretty impressive things, we can see where it’s going, it’s evolving very fast and so on. You can also already use it, and that’s been the case for a few months.
You can already use it at scale to produce some fairly commodity content, but it remains shaky and relatively unreliable when it comes to generating visual or audio content that you really care about at scale. That’s where we were with text a year ago. It was great, you could do a lot of interesting use cases, but scaling it and relying on it at scale was a bit dodgy unless all you wanted was to generate some random tweet about some boilerplate topic.
So that’s pretty much where I think we are with visual and audio right now. Again, I’m talking about at scale, industrially. There’s a lot of cool stuff people post on Twitter where they spent some time on voice work or new images and so on, a lot of cool stuff they can do that they couldn’t do before.
That doesn’t mean it scales industrially from an automation perspective. Having said that, the interpretation capabilities, the image interpretation capabilities, meaning the ability for AIs to look at an image and figure out what it is and what’s in it, that is definitely there. That has arrived.
It’s been there basically since GPT-4, or GPT-4 Vision, which already stepped things up. Right now we are starting to use image inputs in some of the solutions we’re developing with AI, and that has a lot to do with gathering context information and interpreting visual information relevant to the text.
That’s a really interesting component. Say I have a piece of text whose context is that it’s an item in a bullet list, or a heading, or the title of a page. Typically, in NMT workflows and the like, you have to implement fairly significant tooling for the human linguists to figure out the context of a string.
They need to go and see the website, they need special tools for rendering the content in context, and so on, so that people know what it is. With visually aware AIs that can interpret the content, you can solve that. You can export an image of what your content looks like and provide it as context for your string, and the AI will know that this is the heading and this is the bullet list, and maybe your style guide says that headings are supposed to be in the passive voice and bullet items in the active voice.
Without this kind of context information, you can’t make those determinations without fairly heavy lifting. So that part, I think, is definitely there, and like I said, we’re already starting to implement it in the solutions that we use. The context game is really changing. Plus, there’s the ability to know what’s in an image, what it shows, what kind of mood it’s in, and so on.
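As a rough sketch of how a rendered screenshot can be supplied as context for a single string, assuming an OpenAI-style vision-capable chat model; the model name, prompt wording, and wrapper function are illustrative, not the solution described above.

```python
# Hypothetical sketch: pass a rendered screenshot as visual context for a UI string,
# so the model can tell whether it is a heading, a bullet item or body text.
import base64
from openai import OpenAI

client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

def translate_with_visual_context(source_text: str, screenshot_path: str, target_lang: str) -> str:
    """Translate one string, using a page screenshot to infer its role on the page."""
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any vision-capable chat model would do
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": (f"Translate this UI string into {target_lang}. "
                          "Use the attached screenshot to decide whether it is a heading, "
                          "a bullet item or body text, and apply the register your style "
                          "guide prescribes for that element.\n\n"
                          f"String: {source_text}")},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content
```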
All of this can provide very interesting context for the language task. As far as the output is concerned, however, I think there are more difficult problems. First of all, visual content is hard, even harder than language content, because language has internal structure: the semantics are encoded in some kind of code, which is the language, and that makes it much more tractable by machines. There are regularities in there that machines have obviously managed to figure out in order to make sense of it.
That’s the conversation about what LLMs teach us about language, which I think is fascinating, but I won’t get into it now. Visual and audio content, to some extent, are the territory. They’re not the map. Language is the map, and visual and audio content is the territory. It’s the actual thing, reality in itself in a way, right?
Of course, it’s not that simple, but I guess you know what I mean. That means that generating visual and audio content requires AIs to have a model of the world that is far more dimensional, far larger dimensionally speaking, than language-only AIs, which benefit from this internal structure of language that they can hook onto.
So, for that reason, I’m more circumspect about the ability of LLMs, or of generative AI, let’s say, to generate multimedia content at scale in a reliable manner. We’re just starting to see, in the last couple of months or so, some of these video-generating AIs that are able to maintain objects through time, but we’re talking about a few seconds, right?
The drift happens very fast, and when there’s a scene cut, the same character and environment have to carry over into the next shot. There are all kinds of problems like this that are clearly not solved at all for scaled automation, and I’m circumspect about how fast that problem is going to be solved, but I’m not an expert in that field.
Therefore, I remain, let’s say, humbly circumspect. There are, however, some techniques that seem reasonably ready from a technological perspective, things like content replacement. If you have an image, and that’s obviously a core localization use case, an image with content in English in it, I need to replace that content with some other language.
You don’t have the source files with the layers and all of that, which, bizarrely, we get less and less because people expect things to be easy now. So replacing stuff, replacing text, or maybe even replacing some object in an image with some other object, that looks like a more tractable, relatively shorter-term capability. In fact, we already have that capability in production, but for some very simple use cases, and it has some quirks. So, I think that one is a fairly tractable problem. The question, though, is how reliable it is and how much human work you can actually remove from the workflow.
Because if you do this kind of thing but you still need a graphic designer to go and open it in Photoshop, make some modifications, and check that it’s okay, then the economics of it become less compelling.
Especially since, if you also need to pay a massive license for the software that does the replacement and so on, then the economics don’t quite work yet, I don’t think, but I’m quite sure they will in the next year.
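For the kind of in-image text replacement described here, a rough pipeline sketch might look like the following. The detect_text_regions, inpaint_region, and render_text callables are hypothetical stand-ins for OCR, inpainting, and typesetting components, not a specific product.

```python
# Rough, hypothetical sketch of in-image text replacement for localization: find the
# source-language text, erase it, and typeset the translation in its place.
from dataclasses import dataclass
from typing import Callable

@dataclass
class TextRegion:
    box: tuple[int, int, int, int]  # x, y, width, height in pixels
    source_text: str

def localize_image(image: bytes,
                   translate: Callable[[str], str],
                   detect_text_regions: Callable[[bytes], list[TextRegion]],
                   inpaint_region: Callable[[bytes, TextRegion], bytes],
                   render_text: Callable[[bytes, TextRegion, str], bytes]) -> bytes:
    """Replace detected source-language text in an image with its translation."""
    for region in detect_text_regions(image):      # OCR-style detection
        image = inpaint_region(image, region)      # erase the original text
        image = render_text(image, region, translate(region.source_text))
    return image
```

The open question the speaker raises remains visible even in the sketch: every step can introduce visual artifacts, so a human check is still typically needed at the end.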
So, on this topic of multimedia, I’m very hopeful, because I also think we can hold multiple thoughts in our heads at the same time. I think that in eight years or so, and a couple of years ago I was saying ten years, so I have to remain consistent, we will very likely have the first fully AI-generated motion picture. I think that’s very likely, and I think it would be great.
I think it’s going to be a real thing and I’m looking forward to that moment. But again, the generative use case of creating cool new stuff that a human creative team feels they can live with is quite different from having very specific things that you want to do and that you need in order to sell your product, right?
These are different things. So, I think we’ll get there, but I think the pace will be much slower than what we’ve seen for text, because the economic bar is a bit higher. But I could be wrong, and we could be fully disrupted six months from now. Who knows?
Sultan Ghaznawi
What a fantastic and timely discussion, Vincent, on a topic that everyone is thinking about, or at least talking about. There are not many answers out there, and you seem to have some of them.
I thoroughly enjoyed our conversation today, and there was so much to process and learn from. I’m sure everyone found a lot in this conversation that they can apply to their business practice, or at least think about, and as a result hopefully modernize our industry with AI, or at least think along those lines so we can adopt the right tools.
And with that, let me thank you for sharing your experiences and thoughts with us. And I look forward to continuing this discussion with you in the future.
Vincent Henderson
Well, thank you, Sultan. I think that was very interesting, a good set of questions that were very topical and relevant to where we are. I agree, and anyway, it was very enjoyable. I do hope the audience gets some insights out of it that they find useful.
Sultan Ghaznawi
Absolutely. That was very valuable, thank you so much Vincent.
Vincent Henderson
Thank you, Sultan.
Sultan Ghaznawi
It’s time for my round up of the interview and my analysis as to what has been discussed.
We all know that our industry is going through a transformative time, driven primarily by technology and AI, but also by a changing mindset on both the customer side and the supply side. We have a new generation of linguists joining the workforce who are comfortable using Siri and other AI products in their daily lives and will find using AI as a tool to be just part of their work.
The same is true for our colleagues who will be buying linguistic services. The next generation of localization buyers are highly talented, technology-savvy young people who expect their work to be driven by automation at both the process and product levels.
That means they will know the value of content, which content should be created by AI, and where to use human creativity and capability to create, edit, adapt, transform and promote certain types of content.
I think that opens the door for translators to become more specialized and take on a supervisory role over the machines and their output, and their own translations will become crucial for developing highly specialized AI solutions. In summary, the roles will shift, and we must all be ready to embrace the change and adapt to a changing landscape.
That brings us to the end of this episode. I hope you enjoyed it as much as I did. If you were able to take even one action item with you to improve something in your business, then this podcast has accomplished its goal.
Don’t forget to subscribe to the Translation Company Talk podcast on Apple Podcasts, iTunes, Spotify, Audible, or your platform of choice.
Until next time!
Sponsor
Thank you for listening. Make sure to subscribe and stay tuned for our next episode.
Disclaimer
The views and opinions expressed in this podcast episode are those of the speakers and do not necessarily reflect the views of Hybrid Lynx.