S03E21: Speech as a Revenue Stream for LSPs
The Translation Company Talk is back with another exciting episode, and this time we cover an important topic: speech as a service. Audio and video content is exploding across podcasts, streaming platforms, and enterprise training, and language service providers are increasingly asked to carry spoken content across languages through transcription, dubbing, narration, and synthetic voices. In this episode we look at what that means for the business of translation.
To cover this important topic, we hear from Renato Beninatto, co-founder and chairman of Nimdzi Insights. Among the many topics covered in this interview, he discusses the state of the industry in 2022, voice assistants and the human-machine interface, dubbing and narration as traditional voice services, scaling voice work through automation, enterprise demand from training and e-learning, emotional virtual voices and royalties for the voice talent that trains them, automated transcription and language detection, the outlook for automatic interpretation, opportunities for LSPs and freelancers, and much more.
...what really interests us in our business is how voice and language interact. How do you use voice to convey information in another language, either through translation or through the different ways of using voice? That could be dubbing, narration, and just simple audio converted from one language to the other.
Renato Beninatto
Speech as a Revenue Stream for LSPs - Transcript
Intro
Hello and welcome to the Translation Company Talk, a weekly podcast show focusing on translation services in the language industry. The Translation Company Talk covers topics of interest for professionals engaged in the business of translation, localization, transcription, interpreting and language technologies. The Translation Company Talk is sponsored by Hybrid Lynx. Here is your host, Sultan Ghaznawi, with today’s episode.
Sultan Ghaznawi
Welcome to another episode of the Translation Company Talk podcast. Today I have invited my friend and industry thought leader Renato Beninatto to join me and talk about what the proliferation of speech and audio means for our industry.
The author of the General Theory of the Translation Company, Renato Beninatto is recognized as one of the most experienced and accomplished experts in the translation, localization, interpretation, and language services industry. Renato has served on the executive teams of some of the localization industry’s most prominent companies and founded two of the most prominent market research and consulting firms in the language services space. He was president of ELIA, the European Language Industry Association, and an ambassador for Translators Without Borders, a non-profit organization that provides translations for NGOs. He was also vice president of ABRATES, the Brazilian Translators Association, and a former advisor to TAUS, the Translation Automation User Society.
He’s a frequent speaker on globalization and localization issues at industry events and universities around the world. He’s a native Brazilian living in Seattle who speaks five languages and has lived in seven countries around the world. Renato is the author of three books on global business and founded Nimdzi to provide insights to investors, analysts, buyers, and suppliers of language services.
Welcome to the Translation Company Talk Podcast, Renato. How are you?
Renato Beninatto
Very good. And thank you for inviting me over again. I really enjoy our conversations here.
Sultan Ghaznawi
Great to have you again on this podcast, Renato. We spoke recently on another topic, and you are always on top of trends; you know everything that’s happening. Give me a quick introduction for people who are listening to you for the first time, and just tell them what you have been up to. I know you’ve been travelling a lot.
Renato Beninatto
Yes. So, as you know, I am the co-founder and chairman of Nimdzi Insights, which is a market research and consulting company in the language services space. And we look at everything related to the language industry from trends to technologies and the combination of business practices and client expectations. So, the good thing, Sultan, is that we don’t get bored in this business because there’s always more and more demand for translation and language services.
Sultan Ghaznawi
That’s always exciting to hear. Now, how is the industry performing this year? I mean, we are almost at the end of 2022. How was this year for us?
Renato Beninatto
Look, as you know, we publish the Nimdzi 100 every year, and we’re in that period of the year when we start looking back at what happened and at what we predicted would happen.
So far, when we look at the half-year results of the top companies, the ones that are publicly traded and share their information, it’s been another banner year. We’re looking at companies growing in the double digits, even before you factor in exchange rates: the euro and the pound have devalued in relation to the dollar, and a lot of European and UK companies are invoicing clients in US dollars, so their revenues increase significantly when converted from the reporting currency into US dollars, which is the currency we use to report the ranking of translation companies.
So, whether you look at the growth in their original currency or in US dollars, it’s been another banner year. This is an industry that keeps growing. So, a positive outlook.
Sultan Ghaznawi
Speaking of that report, Renato, how do LSPs get their hands on the report that Nimdzi publishes and use that intelligence to their benefit? For example, 2023 is right around the corner, and they could be making decisions about strategy and so forth. I’m pretty sure that report will be critical for them. How do they find it?
Renato Beninatto
Well, the one for 2022 is available on our website; it’s called the Nimdzi 100. The half-year report is also on our website: if you go to the Nimdzi 100 page and look through the latest articles, you’ll find the piece we published, I think at the beginning of October, about the growth we noticed in the first half of the year. It’s a free article. And the next Nimdzi 100 will come out in late February or early March next year, as it does every year. So, it’s nimdzi.com. N-I-M-D-Z-I.
Sultan Ghaznawi
Let’s shift our focus, Renato, to a topic that’s very dear to me, and you have a lot to say about it; I’ve heard you speak about this before. Our industry has always had to deal with speech and audio and video formats, since, I guess, the 60s. I want you to talk about how speech is turning into a service that is distinct from interpreting and text translation. Give me a high-level overview, please. What’s happening with speech?
Renato Beninatto
Okay. Speech is a very broad topic, right? You mentioned interpretation. It’s an area that has seen a lot of progress, and it’s a service that is mostly humans translating humans orally, right? It’s a very straightforward type of service: you listen to information in one language, it is converted in the brain of the interpreter, and the interpreter speaks the translation into another language almost automatically. Simultaneous and consecutive interpretation as concepts have been around since the late 40s and early 50s, with the Nuremberg trials; some people will claim it is the second oldest profession in the world, right? It has been around since time immemorial.
But what people like to talk about now, what we’re talking about, is interactive voice: the use of voice in communication between humans and machines, these voice-assisted systems like Alexa, Cortana, the Google Assistant, and you name it, there are several of them. This is the human-machine interface, and we all know how that works. It’s a technology that has advanced a lot in the past few years but has been hitting a ceiling. We recently had news about layoffs at Amazon, mostly in the Alexa group, not because Alexa isn’t performing in terms of voice technology, voice recognition, and acting on what it recognizes, but because they haven’t been able to turn that user interface into a business model to monetize Alexa, right? People use it to ask for weather information and listen to the radio; kids use it to help them with their homework. But Amazon’s original goal was to use it as a channel for people to buy more stuff from them. In that sense, voice is at a mature stage.
On the other hand, what really interests us in our business is how voice and language interact. How do you use voice to convey information in another language, either through translation or through the different ways of using voice? That could be dubbing, narration, and just simple audio converted from one language to the other. I’m not talking about automatic interpretation, which is another field, one still in its infancy; it’s not yet mature enough to be commercially available, even though many have tried.
But at the seed of this is the concept of audio content that is converted into text, translated in text format, and output as the original content in another language. The way we have traditionally done this is through a human, right? Somebody transcribes the text. And this voice-to-text function is very automated these days: you carry a cell phone, an Android or an iPhone, and you can dictate in any language you want.
The next step is how you translate that text into another language, and how you transform that text back into voice. The way we do it today is you get an artist: if the original is a female voice, you get a woman; if it’s a male voice, you get a man. They read the script and act it out in movies, TV series, theatrical events, and so on. So, there is a re-enactment of the original content in a foreign language. This is how it has been done traditionally.
Now, because there is so much audio and video content coming out, podcasts, streaming, movies, videos, documentaries, YouTube and other platforms, you name it, the new area of development is how you scale. Technology comes into play when you don’t have enough resources to scale: there is more demand than there is supply of talent to produce the output, right, the voice in a foreign language. And we are at that threshold between what can be automated and what needs to be done by humans. These are exciting times. We are still developing the standards, expectations, and rules for when you can use one and when you can use the other. But some areas of voice technology are much more advanced than others.
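[Editor’s note: To make the voice-to-text step concrete, here is a minimal sketch using the open-source Whisper speech recognition model, released in 2022. The model size, language hint, and file name are illustrative assumptions, not a reference to any specific tool discussed in the episode.]

```python
# pip install openai-whisper  (also requires ffmpeg on the system)
import whisper

# Load a small multilingual checkpoint; larger ones trade speed for accuracy.
model = whisper.load_model("base")

# Transcribe a hypothetical recording, hinting at the source language.
result = model.transcribe("interview.wav", language="de")
print(result["text"])  # plain-text German transcript, ready for translation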
Sultan Ghaznawi
Thank you, Renato. As you said, this is very broad; there are so many subfields of voice, or speech. But at a high level, in what applies to us, who are the main stakeholders in the speech business in terms of buyers, users, and suppliers?
Renato Beninatto
Look, it’s hard to pinpoint one specific area, right? Voice is everywhere these days. You get into an elevator and the elevator speaks to you; that is an opportunity for voice development. I remember going to Japan and entering an elevator, and it would say, what I imagine is, good morning, first floor, second floor, tenth floor, hundredth floor, and so on. And that could be translated, right? But today, a lot of the voice work is coming out of… okay, let me go back a little bit before I go forward.
A lot of this technology was developed out of military demand. You have systems monitoring calls, monitoring TV shows, monitoring spy channels, whatever it is, and a huge amount of captured content needs to be transcribed, translated, and analysed by specialists, right? So, a lot of this voice-to-text technology found military application after 9/11, when there was a huge surge in demand, especially for languages where you didn’t have many professional translators. That story is already over 20 years old, but that was the origin.
Today, the demand for voice comes from the enterprise: from marketing departments, and from product training and e-learning inside organizations. And then there is what we would call the theatrical side, where you have streaming channels, and the number of channels streaming content around the world is astounding. Tens of thousands of channels are producing or reproducing content every day, throughout the world, in multiple languages, and they want whatever is produced in one language to move into other languages. And this is not, let’s put it that way, US- and Europe-centric. There is huge development of, and demand for, audio content and video content with accompanying audio in Asia. Koreans are famous for producing movies and TV series that are very popular throughout Asia, and there is a lot of demand for Korean into Thai, into Tagalog, into Chinese. So, this is happening all over the world. Entertainment is a big driver of voice demand, I would say.
Sultan Ghaznawi
What types of activities and outputs are involved in the speech processing business in the context of the language industry? What do we receive, and what do we return to our customers?
Renato Beninatto
So, I think that the number one application in the enterprise today is related to training: e-learning, onboarding of new customers and new employees, situations where multimedia content needs to be consumed by a large number of people. And that goes not only for the employees of the company but also for certain clients. I saw this especially during the pandemic, when you couldn’t provide in-person training for experts. I’m familiar with an account in the automotive industry that was training and updating mechanics throughout the world on the features of new models: what they needed to learn about installing, fixing, and addressing things in the setup of the automobile.
An interesting thing also has to do with the human behaviour around this, right? I’m one of those people who, not too long ago, thought this behaviour change was going to take some time to happen, but it’s happening. People are very comfortable nowadays talking to machines and giving voice commands for things to happen.
But a voice command to interact with a machine doesn’t mean there is a business model around it, right? That’s voice recognition, trained on the instructions or commands that are given. I think the value in the enterprise, and I can see this in my day-to-day, is in the transcription software that can capture conversations in the meeting environments that exist. It is more and more important to be able to capture those transcripts in more than one language, and those transcripts need to be summarized, translated, and distributed inside organizations. That’s a practical application that is possible in the market today.
Sultan Ghaznawi
There are so many use cases, as you pointed out, and speech is probably the most natural way, well, the only natural way for humans to communicate with each other. And now computers, as you said, are intelligent enough to use the same medium to talk to us and understand us. In your opinion, Renato, what is the next step for speech to become more natural and useful? How can we make machines more intelligent with speech?
Renato Beninatto
Okay, this is the area where I think we are in the infancy of some very interesting developments. I have recently seen a couple of companies showing demos of voices that use artificial intelligence to create what they call emotional virtual voices. There are companies doing this today, and I’m sure Hybrid Lynx has a solution for that. One of the companies that I own is Multilingual, the magazine for the language industry, and we have a podcast called Localization Today where, for a certain period of time, and now more and more frequently, instead of using humans to read the stories, we have bought artificial or virtual voices that can put a little bit of emotion into the reading of the content that we provide the platform, right?
The other thing that I have seen, which I find very interesting in the voice space, is using professional voice talent to train virtual voices with their voice and their tone, and paying those voice artists a royalty whenever their voice is used in a commercial environment.
And there is a tool called MateDub, the dubbing counterpart to the MateSub subtitling tool, that does a very good job. It cannot do full dubbing, but it does an excellent job of narrating content with a human voice that brings inflection and emotion to the text being read. You provide the text, the voice reads it and transforms it into an output. I find this intriguing, I find it interesting, and I find it fair: you use automation with the voice of a real person rather than a synthetic voice. There are hundreds of those voices available in the market, and they are very affordable, to use a good word there.
Sultan Ghaznawi
Well, thanks for mentioning Hybrid Lynx, Renato. This week we actually launched the DoSTT AI project. My company has made it available in beta to LSPs and folks in the localization industry who are interested. We developed a platform that turns speech into text in a number of languages and translates it on the fly. So, if you have speech in German, you can request an English transcript of it. If anyone is interested, please reach out to access the beta for free and unlimited use. That was a bit of a shameless plug there from me.
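[Editor’s note: DoSTT’s internals are not discussed in this episode, but the speech-in, English-text-out idea Sultan describes can be sketched with Whisper’s built-in translate task, which emits an English transcript directly from non-English speech. File and model names here are illustrative.]

```python
import whisper

model = whisper.load_model("medium")  # multilingual checkpoint

# task="translate" produces an English transcript straight from
# non-English audio, e.g. a hypothetical German recording.
result = model.transcribe("german_speech.mp3", task="translate")
print(result["text"])  # English text
```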
Renato Beninatto
That’s fair.
Sultan Ghaznawi
Do you think such tools have a place in transcription, like machine translation did for text, basically creating the foundation on which a human-in-the-loop process can improve the quality and make it usable?
Renato Beninatto
Yes. And this has been the case for a while already; it’s not necessarily revolutionary. There are tools like Rev, and even Amazon provides some of these services around voice recognition and transcription.
But the interesting thing is that what used to be the exception, the niche case, is going mainstream. Every client can now expect to get a transcription of a meeting or a presentation in minutes instead of hours or days. And the technology is mature enough that, with a little bit of post-editing, that transcription is a very accurate tool for keeping track of things, defining what actions need to be taken, or making decisions based on the information you get. I am an avid user of services like that. In fact, at Nimdzi we have an add-on to our meetings that transcribes all of them automatically. It makes it very easy to write summary notes and send them back to our client with feedback on the things that were discussed; it makes note-taking much more interesting and generates some statistics as well. This has become almost an expectation, and it has affected my behaviour: I got so used to this transcription feature that I seldom take notes these days, because I can just go back and refer to the transcript.
An interesting point here, though, and an area where there is opportunity for development, is that the transcription is essentially done in English, and you have to select another language manually if you are switching. This happened to me this morning: I was having a conversation with a client in Spanish, and I forgot to switch the transcription from English to Spanish. I think they only have it available in four languages.
But automatic language detection is something that I hear Meta is working on, on the voice side, in videos and so on. And that’s an area for expansion that I think is quite interesting.
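[Editor’s note: Automatic language detection is already exposed in open models. Below is a minimal sketch, adapted from Whisper’s documented API; "clip.wav" is a placeholder file.]

```python
import whisper

model = whisper.load_model("base")

# Load the audio, trim or pad it to 30 seconds, and compute a log-Mel spectrogram.
audio = whisper.load_audio("clip.wav")
audio = whisper.pad_or_trim(audio)
mel = whisper.log_mel_spectrogram(audio).to(model.device)

# detect_language returns probabilities over the supported languages.
_, probs = model.detect_language(mel)
print(f"Detected language: {max(probs, key=probs.get)}")
```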
Sultan Ghaznawi
From what I understand, the speech services area opens up many opportunities, as you mentioned. For example, speech-to-text services such as ASR require large corpora of transcribed audio to train machine learning models, right? So English is really not a problem, as you mentioned, given the amount of English data available on the internet. How can our industry, as the expert in languages, in foreign languages, bridge the gap for the other languages?
Renato Beninatto
You know, this is a mixture of commercial push and academic interest, right? The long-tail languages have very little content from which to generate that type of training data. However, training a voice model is, I think, easier than training a whole language translation model, because language translation requires pairings and translation memories and comparisons and things like that.
But for audio, theoretically, you could have somebody read a dictionary, which would take, I don’t know, days, not years, and create a model for that language. And there are many initiatives along these lines. As for the commercial languages, like I said, on my iPhone: I joke that my weakest written language is French, because I learned it as an adult and I sometimes struggle with the accents. But when I dictate, the iPhone places the accents perfectly, so I don’t need to worry about it. When I need to write in French, I just dictate, right? One of the things I noticed at the latest events I’ve attended, when we talk about AI, machine translation, and language technologies, is that for the top 20 languages we have plateaued in the development of the language models we have. There is going to be very little improvement in machine translation from French into German, from English into Japanese, and so on, because the marginal improvement from more training is very low. They are trained enough.
So, the focus, especially for a company like Meta, has been to expand this technology and their interest through the program they call No Language Left Behind, where they are looking at a matrix of 200 languages combined with 200 languages. There is a lot of leveraging going on there. And these days, when you talk about language models, you are intrinsically addressing the voice and audio element as well as the text element. There is a challenge in that speech and audio require a lot more storage than pure text, which is very simple, but that has become less of an issue and has not been a major constraint on the development of voice capabilities.
So, in the long tail, it’s going to take a while; in other languages, the efforts are there. We’re still at the dawn of this technology, and as new markets come online, the application of voice commands is going to extend into other languages. I wouldn’t be surprised if we hear about voice technology for African languages in the very near term.
Sponsor
This podcast is made possible with sponsorship from Hybrid Lynx, a human-in-the-loop provider of translation and data collection services for the healthcare, education, legal and government sectors. Visit HybridLynx.com to learn more.
Sultan Ghaznawi
You talked about this earlier, Renato, but this is something very interesting that I keep an eye on, and I’m sure you’ve been watching what’s happening there: technology that could translate spoken words directly, without the intermediate steps of converting speech to text, applying machine translation, and then rendering that text back as synthetic speech. It is in extremely early stages, as you mentioned, but it could have a monumental impact on the interpreting industry. What are your thoughts on innovation in that area, and where do you see it going in terms of potential opportunities?
Renato Beninatto
This is an interesting conversation, Sultan, because you have opposing forces playing a role in this space when it comes to interpretation, right?
On one side, the virtual interpretation environments, where you don’t need people to be present in the same location to make simultaneous interpretation happen, got an amazing boost. We used to say that virtual or remote interpretation was a technology in search of a problem; then the pandemic made the problem real, and the solution was there.
However, after the pandemic, that market, which had seen an amazing push for innovation and development, has lost momentum. I think that what was once a solution to a new problem is now just becoming a feature inside the meeting environments: Zoom, Teams, WebEx, and all of that. The technology there is going to be more of a feature. The value in having human interpreters will come from making it easy to identify interpreters on the fly and bring them online.
So, I think that in the interpretation space we are moving into more of a marketplace environment than toward automation as such. We know that Zoom and Microsoft are working on attempts to create automatic interpretation, using just technology and voice recognition without text as the intermediary. But it’s at a very, very early stage. I haven’t seen any convincing demos that something like this is anywhere near completion. You can see the text-to-text and the text-to-speech components work, but it’s still very buggy, it’s still very early, and I don’t know that there is huge demand for it right now. I think it’s going to take some time. I think we’re in the early stages. I said “I think” too many times here, so let’s move on.
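[Editor’s note: To show concretely what “text as the intermediary” means, here is a hedged sketch of the cascaded pipeline, speech to text, then text to text, then text to speech, that direct speech-to-speech systems aim to collapse. The libraries and model names are illustrative assumptions, not what Zoom, Microsoft, or anyone in this episode actually uses.]

```python
# pip install openai-whisper transformers sentencepiece pyttsx3
import whisper
from transformers import pipeline
import pyttsx3

# 1. Speech to text: transcribe a hypothetical English recording.
asr = whisper.load_model("base")
english_text = asr.transcribe("talk.wav", language="en")["text"]

# 2. Text to text: English to German with a public Helsinki-NLP model.
translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
german_text = translate(english_text)[0]["translation_text"]

# 3. Text to speech: synthesize with a local engine (a flat, unemotional voice).
tts = pyttsx3.init()
tts.save_to_file(german_text, "talk_de.wav")
tts.runAndWait()
```

Each stage adds latency and can compound the previous stage’s errors, which is one reason convincing end-to-end demos remain rare.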
Sultan Ghaznawi
Renato, how do you see LSPs being able to scale up to offer speech on top of the text translation offering they’ve traditionally had? How can they leverage that capability to scale up into speech?
Renato Beninatto
That’s a natural extension of the services they provide, right? LSPs adapt. You’ve heard me say before that we don’t create anything in the language services industry; we are constantly transforming. So, this is just another form of content transformation that LSPs have to handle. The ones that have needed to do this have stepped up to the plate and delivered, because how you deliver today is less important than whether or not you are delivering, right?
The challenge is how you say yes to your client; how you do it in the background doesn’t matter. But as the volumes increase and there is more demand for transcription and voice-related services, automation creeps in, because you cannot do everything manually: there is not enough talent, and there is an element of scalability and timeliness, since clients don’t have the patience to wait long for a project to be delivered. So, this automation becomes critical. I think the role of the LSP is going to be the role the LSP has always had in any language-related service: to facilitate this transformation from one language to another, in whatever format, and to manage the projects around it. The core competency here is going to continue to be project management. The voice resource, the transcription, and the output in another voice format are just resources that a project manager needs to manage within the customer relationship framework.
Sultan Ghaznawi
How do you see the impact of speech localization and generation on text content? Would it cause a reduction in document translation as more people use the speech medium to talk to machines?
Renato Beninatto
No, I don’t think so, because it’s not an either-or situation. It’s not a replacement situation; it’s a co-existence situation. And we didn’t even get to talk about another area that drives a lot of this demand. We talked about the military, but there is also more and more demand in the accessibility space, right? Voice is extremely important for the blind. We talk a lot about ASL and sign-language communication for the deaf and hard-of-hearing community, but there is also the blind and visually impaired community, and there has always been a huge demand for audio and transcribed content for that community. So, there is a whole element of language access, social justice, and diversity and inclusion for communities that don’t have the same access most people have, right? That’s another driver of growth, and it has always been there in the background. So, Sultan, we shouldn’t fear one replacing the other. It’s something that will co-exist: automation will address things that don’t require an extreme level of quality, and humans will move to the niches where quality is very important.
Sultan Ghaznawi
Speaking of that, what do freelancers and translators need to know about speech? How can they benefit from this lucrative market?
Renato Beninatto
Use it first. Okay, I will tell a little anecdote here. When I started in the industry back in the 80s, I used to dictate. I translated novels and adventure books that were sold at newsstands, and I was translating a book a month. The way I produced this was that I dictated, and my wife would type the translation, so we were doing translation and editing at the same time, before computers, right? Collaborative translation, in real time. Today, you don’t need that. The need for the typist is gone, because technology takes care of that part.
So, for the freelancer and the professional in this space, there is the opportunity to use voice as a productivity improvement. You can dictate your translation and then post-edit it in whatever environment you work in. In any tool in your work environment, Microsoft Word, for example, you click a button and there is a function to dictate. On your iPhone, you can dictate and transform speech into text. So, first of all, learn how to use it as a productivity tool.
The other thing is not to be afraid, and to promote that kind of work. There is an opportunity, like I mentioned, with companies that are synthesizing voice talent into virtual voices that can generate royalties for them. And I think we are just scratching the surface of these opportunities. There are opportunities for translators to dub content and to have home studios.
One of the things that has been transformed in the voice space is the studio requirement. You used to have to go to a studio with amazing microphones, filters, and a soundproof environment in order to do very good dubbing or narration. Today, the prices of that technology have dropped significantly. You can get an amazing professional-quality microphone for 50 bucks; you can get a mixing table and work as a professional in the voice area. If you have a good voice and you are a good narrator, there is a huge opportunity to expand and grow in this market without breaking the bank and without leaving home. I think that’s an area where voice is also changing, and there is a lot of opportunity for freelancers.
Sultan Ghaznawi
How should the industry prepare for speech in terms of tools and talent? For example, the internet giants are continuously innovating. How can we respond and become suppliers to them?
Renato Beninatto
Okay, so people say that the language industry is slow to adopt new technologies, but it is very fast to copy success stories, right? As we know, this industry has very low barriers to entry. I think that for the industry as a whole, the most important thing is to be informed, to be aware of what is going on. Part of our job at Nimdzi is to track these developments, but whatever your source of information, keep an eye and an ear open to what is happening in the voice space, because it is seeping into more and more of what we do; it is just a different form of output for the things that we do.
At the end of the day, like I say, one of the things that has remained permanent in the language industry, one of the things that doesn’t change, is that we convert content from one language to another. That is the core of what we do. The way, the format, the process, the structure, and the technologies that we use don’t change that core demand for converting content from one language to another.
The good thing is that we, as individuals, are part of the market, and we behave the same way the market does. If your kids are comfortable using voice technology, and you have seen the many stories and jokes about kids asking Alexa to do math problems for them, then it’s too late: the technology is already part of our reality. The future is already here; it’s just not evenly distributed, as the saying goes.
All the elements for creative ways of using voice output are around us; it’s just a matter of bringing them together. We always end up mentioning this thing that we don’t know what it will look like, but we know it will exist: the metaverse, this new way of interacting with technology, enabled by the amazing bandwidth that comes with 5G. It is going to create new demands, new technologies, new platforms, new outputs, new formats.
So, our job is to be attentive, to be prepared, and to be ready for the change when it comes, to be ready for the demand when a client’s request reaches you and you already know what’s going on and what’s available. And kudos to you, Sultan, for taking the lead and starting to incorporate that into your offering as an LSP in this space.
Sultan Ghaznawi
Thank you so much, Renato. I wanted your wisdom today, and I think we got a lot of it. That was a fascinating discussion, and I thoroughly enjoyed talking to you, as I always do. I’m pretty sure Nimdzi will have a lot more to say about speech in the next 12 months or so, as it proliferates. So, keep an eye out for Nimdzi reports; they are essentially guidelines for the industry on what to do and what’s coming next.
With that, I want to thank you for sharing your expertise and experience and perspective with the industry. I hope we can do this again soon.
Renato Beninatto
Anytime, Sultan. It’s always a pleasure talking to you.
Sultan Ghaznawi
Our industry is undergoing rapid shifts in terms of forced innovation. I say forced innovation because we are always in a reactive mode. Speech has been researched, and used commercially, for decades, but only now is our industry taking notice. We did provide manual transcription in the past, yet we have not really looked at how to leverage our expertise and experience from that type of work to address the need for speech services that exists today. I think that as more people move to speech-based interaction with technology, we will see demand for speech-related services increase. We must be ready to rise to the challenge, and now is the time to think about it.
That is a wrap for today’s episode. I had a fun time talking to Renato. He is always thinking ahead, and our industry always benefits from his predictions. I hope you were able to take advantage of this interview and came away with at least one action item to apply to your business and improve your bottom line. That would mean I have hit my objective.
Don’t forget to subscribe to the Translation Company Talk Podcast on Apple Podcasts, iTunes, Google Podcasts, Spotify, or your platform of choice. Remember to give this episode a 5-star rating.
Until next time.
Outro
Thank you for listening. Make sure to subscribe and stay tuned for our next episode.
Disclaimer
The views and opinions expressed in this podcast episode are those of the speakers and do not necessarily reflect the views of Hybrid Lynx.