Amazon announced earlier this week that it had made its home-grown Amazon Translate service generally available.
Like other Amazon Web Services (AWS) offerings, you can use the service across websites, apps, and text-to-speech applications. I should stress that this is a “neural” machine translation service, an approach that has proven surprisingly effective at producing more natural-sounding output over time. Google and others are also investing heavily in neural MT.
And you can give it a free test drive; according to AWS, “the first 2 million characters in each monthly cycle will be free for the first 12 months starting the day you first use the service.”
The major limitation right now is languages: it supports translation from English into just six languages, which feels rather retro compared to MT services from SDL and Google. Google Translate, by comparison, is 12 years old and supports 100+ languages (of varying degrees of quality).
And more languages are coming. I can’t comment on the quality of the translation but would love to hear what others have experienced so far.
As I’ve noted in the 2018 Web Globalization Report Card, machine translation continues to gain fans among global brands — not just internally but externally. That is, visitors to websites can self-translate content, a feature I have long recommended for a number of reasons.
So it’s great to see another machine translation service available at scale for organizations of all sizes.
PS: Interesting to see a recommendation from Lionbridge on the home page — a happy Amazon Translate client.
I’m excited to announce the publication of The 2018 Web Globalization Report Card. This is the most ambitious report I’ve written so far and it sheds light on a number of new and established best practices in website globalization.
First, here are the top-scoring websites from the report:
For regular readers of this blog, you’ll notice that Google was unseated this year by Wikipedia. Wikipedia, with support for an amazing 298 languages, improved its global navigation over the past year, which pushed it into the top spot. And Wikipedia, because it is completely user-supported, demonstrates that there is great demand for languages on the Internet — and that very few companies have yet responded in kind.
Google could still stand to improve in global navigation, as could Facebook.
Other highlights from the top 25 list include:
Consumer goods companies such as Pampers and Nestlé are making real strides in improving their web globalization skills, a positive sign that non-tech companies are catching up.
As a group, the top 25 websites support an average of more than 80 languages (up from 54 last year); but note that we added a few websites that made a big impact on that average.
Luxury brands such as Gucci and Ralph Lauren continue to lag in web globalization — from poor support for languages to inadequate localization.
The average number of languages supported by all 150 global brands is now 32.
The data underlying the Report Card is based on studying the leading global brands and world’s largest companies — 150 companies across more than 20 industry sectors. I began tracking many of the companies included in this report more than a decade ago and am happy to share insights into what works and what doesn’t.
I’ll have much more to share in the weeks and months ahead. If you have any questions about the report, please let me know.
Congratulations to the top 25 companies and to the people within these companies who have long championed web globalization.
I remember when Google Translate went live. Hard to believe it was 10 years ago.
I remember thinking that this relatively new technology, known as Statistical Machine Translation (SMT), was going to change everything.
At the time, many within the translation community were dismissive of Google Translate. Some viewed it as a passing phase. Very few people said that machine translation would ever amount to much more than a novelty.
But I wasn’t convinced that this was a novelty. As I wrote in 2007, I believed that the technologists were taking over the translation industry:
SMT is not by itself going to disrupt the translation industry. But SMT, along with early adopter clients (by way of the Translation Automation Users Society), and the efforts of Google, are likely to change this industry in ways we can’t fully grasp right now.
Here’s a screen grab of Google Translate from 2006, back when Chinese, Japanese, Korean and Arabic were still in BETA:
Growth in languages came in spurts, but roughly at a pace of 10 languages per year.
Most common translations are between English and Spanish, Arabic, Russian, Portuguese and Indonesian
Brazilians are the heaviest users of Google Translate
3.5 million people have made 90 million contributions through the Google Translate Community
The success of Google Translate illustrates that we will readily accept poor-to-average translations over no translations at all.
To be clear, I’m not advocating that companies use machine translation exclusively. Machine translation can go from utilitarian to ugly when it comes to asking someone to purchase something. If anything, machine translation has shown to millions of people just how valuable professional translators truly are.
But professional translators simply cannot translate 100 billion words per day.
Many large companies now use machine translation, some translating several billion words per month.
Companies like Intel, Microsoft, Autodesk, and Adobe now offer consumer-facing machine translation engines. Many other companies are certain to follow.
Google’s investment in languages and machine translation has been a key ingredient to its consistent position as the best global website according to the annual Report Card.
Google Translate has taken translation “to the people.” It has opened doors and eyes and raised language expectations around the world.
It has been a decade since Google Translate took machine translation to the masses — a topic for a future post.
But most companies will not be using Google Translate anytime soon to power their machine translation efforts. They want more control over customizing the engine, leveraging existing translation memories, and other capabilities that Google doesn’t yet offer. So they turn to vendors such as Microsoft, SDL, and SYSTRAN, a company that pioneered machine translation decades ago.
SYSTRAN was acquired by a Korean machine translation company in 2014 and earlier this year launched an online machine translation platform called SYSTRAN.io. This platform allows companies to leverage machine translation (and other services) via API. In other words, you don’t have to purchase an expensive enterprise license or host any software — you just connect your software to SYSTRAN’s engine. And, perhaps best of all, SYSTRAN has allowed anyone to take a free test drive of roughly a million characters of translation per month.
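To make the API model concrete, here is a minimal sketch of what such an integration might look like. Note that the endpoint URL, payload fields, and response shape below are hypothetical placeholders for illustration, not SYSTRAN’s documented API; consult the SYSTRAN.io documentation for the real parameters.

```python
import json
import urllib.request

# Hypothetical endpoint and key, for illustration only (not SYSTRAN's actual API).
API_URL = "https://api.example.com/translate"
API_KEY = "your-api-key"

def build_request(text, source="en", target="fr"):
    """Build an HTTP request for a hypothetical JSON translation endpoint."""
    payload = json.dumps({"input": text, "source": source, "target": target}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + API_KEY},
    )

def parse_response(body):
    """Extract the translated text from a hypothetical JSON response body."""
    return json.loads(body)["outputs"][0]["output"]

# Illustrative response shape only; a real call would use urllib.request.urlopen.
sample = '{"outputs": [{"output": "Bonjour le monde"}]}'
print(parse_response(sample))  # Bonjour le monde
```

The point of the sketch is the shape of the integration: no installed software, just an authenticated HTTP call from your application to the hosted engine.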
To learn more, here’s a Q&A I recently conducted with the company:
What are the benefits/solutions that SYSTRAN.io provides?
SYSTRAN.io allows software developers, customer experience (CX) companies, multi-national marketing departments, social media and marketing technology companies, and online gaming developers to build multilingual applications with the same software that was once available only to large, international companies.
How many language pairs are supported?
There are up to 50 languages supported, depending on the particular module.
What is the most popular usage model (so far) for SYSTRAN.io? In terms of volume of user queries:
So far, the most popular usage is for language translation on mobile devices.
In terms of numbers of solutions built:
Language translation within customer support forums is strong because companies and customer-support software-as-a-service agencies can translate large numbers of documents in their FAQ knowledge bases. This helps decrease call volume (the highest operational cost of customer support) and increase customer satisfaction scores because users can find their answers faster.
How do you leverage the platform to conduct “sentiment analysis” of user-generated content?
The number of media channels (social media, review sites, blogs, support forums) that form part of the user experience is growing every day, and companies are receiving unstructured commentary across these platforms in many different languages. Developers using a combination of SYSTRAN.io’s modules can enable that content to be mined for information in any language, flag positive or negative comments, then categorize those comments and generate responses in the language of the user. Imagine 50,000 comments, where 20,000 rank negative, but 500 are extremely negative and defamatory. Those are the ones you want to reply to first.
For example, with the Olympics coming up, imagine a brand is sponsoring an athlete and, the night before the big race, he gets caught using enhancement drugs or cheating on his wife — it hits social media fast. How do you respond if you don’t know about it because the comments are in multiple languages? Or, on the opposite end of the spectrum, imagine many fans see an athlete wearing a particular shoe or clothing item and want to know where they can get it — and they are asking on Twitter. Right now, there are many eCommerce sites and marketing agencies that are “listening” for those tweets in multiple languages and selling to customers online. SYSTRAN.io can make it easier for developers to build apps that listen in multiple languages and then respond in those same languages.
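The triage workflow described above can be sketched in a few lines. This is a toy illustration only: the keyword table below stands in for what would, in practice, be language-detection, translation, and sentiment-analysis API calls.

```python
# Toy sketch of triaging comments by negative-sentiment severity.
# The keyword weights are invented stand-ins for a real sentiment API.
NEGATIVE_WORDS = {"terrible": 2, "awful": 2, "bad": 1, "broken": 1, "scam": 3}

def severity(comment):
    """Score a comment; higher means more negative (stand-in for a sentiment model)."""
    return sum(NEGATIVE_WORDS.get(w, 0) for w in comment.lower().split())

def triage(comments, threshold=3):
    """Return comments at or above the severity threshold, most negative first."""
    scored = sorted(((severity(c), c) for c in comments), reverse=True)
    return [c for s, c in scored if s >= threshold]

comments = [
    "love this product",
    "the app is bad",
    "terrible support, total scam",
    "shipping was awful and box broken",
]
print(triage(comments))
```

The same shape scales to the “50,000 comments, 500 extremely negative” scenario: score everything automatically, then hand only the worst bucket to a human responder first.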
Explain “anonymization” as a feature of your service.
Because of laws such as the Safe Harbor framework, law firms, financial publishers, and many other multi-national firms are required to remove “personal information” such as names, addresses, and social security numbers from any information they send overseas to their counterparts at another office. In this scenario, companies need to remove this information, or “anonymize” it, from the large data set. The personal information is then sent in a separate packet or code, so their teammates can receive and reassemble the data safely.
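A minimal sketch of that idea, assuming a simple pattern-based approach: PII is replaced with placeholders in the main text, and the originals are kept in a separate mapping to be transmitted independently. (Real anonymization needs far more robust PII detection than two regexes.)

```python
import re

# Patterns for two easy-to-spot PII types; real systems cover many more.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def anonymize(text):
    """Replace PII with numbered placeholders; return (clean_text, mapping).

    The mapping is the separate "packet" of personal data that would be
    sent over a different channel so teammates can reassemble the record."""
    mapping = {}
    def substitute(kind):
        def repl(match):
            key = "[%s-%d]" % (kind, len(mapping) + 1)
            mapping[key] = match.group(0)
            return key
        return repl
    for kind, pattern in PATTERNS.items():
        text = pattern.sub(substitute(kind), text)
    return text, mapping

clean, vault = anonymize("Contact john@example.com, SSN 123-45-6789.")
print(clean)   # Contact [EMAIL-2], SSN [SSN-1].
print(vault)
```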
How is SYSTRAN.io different than SYSTRAN’s Enterprise platform?
SYSTRAN.io is based on the same language translation and NLP technology that powers SYSTRAN’s enterprise offering used by Symantec, Cisco, Airbus, Ford, Toyota, BNP Paribas, Daimler, Barclays, defense and security organizations such as the U.S. intelligence community, NATO, Interpol and language service providers. It is equally robust, but the security responsibility falls on the developer of the particular application for anything beyond what is already built-in for the SYSTRAN.io aspect. The enterprise server, on the other hand, offers increased security as it can be installed behind an organization’s firewall. Also, the enterprise server offers 130 language pairs.
How does SYSTRAN.io compete against existing web service offerings from competitors?
We don’t know of any pure language technology companies that are offering free usage of multilingual development APIs to developers, do you? We’ve seen technology companies attempt to enter the language translation technology space but they do not have the content necessary to accomplish viable translations. Language translation technology is easy to talk about but extremely difficult to accomplish.
For SYSTRAN, this has been our 100 percent focus for nearly half a century. Now we are opening those decades of language translation content to developers. These databases have been contributed from linguistic and intelligence knowledge workers who have compiled learnings and optimizations from trillions of translations served dating back to our first client – the US Air Force in 1968 during the Cold War – to today. Our translation databases are deep and robust.
I believe you are offering a million characters of translation for free per month – is that correct? How long will this offer last?
That is correct. Once you sign up you have a million free characters (plus free usage levels for the other APIs) per month, every month; we want to encourage people to use our tools and not burden them with a cost at the development stage. The end date is open for now.
Google Translate is the world’s most popular machine translation tool.
And, despite predictions by many experts in the translation industry, the quality of Google Translate has improved nicely over the past decade. Not so good that professional translators are in any danger of losing work, but good enough that many of these translators will use Google Translate to do a first pass on their translation jobs.
But even the best machine translation software can only go so far on its own. Eventually humans need to assist.
Google has historically been averse to any solution that required lots and lots of in-person human input — unless these humans could interact virtually with the software.
Behind Google’s machine translation software are humans.
In the early days of Google Translate, there were very few humans involved. The feature that identified languages based on a small snippet of text was in fact developed by one employee as his 20% project.
Google Translate is a statistical machine translation engine, which means it relies on algorithms that digest millions of translated language pairs. These algorithms, over time, have greatly improved the quality of Google Translate.
But algorithms can only take machine translation so far.
Eventually humans must give these algorithms a little help.
So it’s worth mentioning that Google relies on “translate-a-thons” to recruit people to help improve the quality.
According to Google, more than 100 of these events have been held, resulting in the addition of more than 10 million words:
It’s made a huge difference. The quality of Bengali translations is now twice as good as it was before human review. And in Thailand, Google Translate learned more Thai in seven days with the help of volunteers than in all of 2014.
Of course, Google has long relied on a virtual community of users to help improve translation and search results. But actual in-person events are a relatively new level of outreach for the company — and I’m glad to see it.
This type of outreach will keep Google Translate on the forefront in the MT race.
I base this optimism in part on discussions I’ve had this year with dozens of marketing and web teams across about ten countries. While every company has its own unique worldview and challenges, a number of patterns have emerged. And I can tell you that there is a great deal of enthusiasm for web globalization — backed by C-level investments.
And this enthusiasm is not simply driven by China any longer — which is a healthy thing to see. Executives have a more realistic and sober view of China, and this has resulted in smarter and longer-term planning and investments. That’s not to say China won’t continue to dominate the headlines in 2014, as it most certainly will. But companies are now taking a closer look at countries such as Thailand, Indonesia, Turkey, India, and much of the Middle East.
As I look ahead, here are a few other trends I see emerging in the year ahead:
Machine translation (MT) goes mainstream. I’ll have much more to say about this in future (you can subscribe to updates on the right) but suffice it to say, MT is not just for customer support anymore. Companies are looking to use MT as a competitive differentiator, and we’re going to see more real-world examples on customer-facing websites. And customers around the world will love it. (And, no, I’m not suggesting that human translators are in any danger of losing their jobs; quite the opposite!)
Responsive global websites also go mainstream. True, there are valid reasons for NOT embracing responsive websites, but for most companies, this is a clear path forward. It helps manage the chaos internally and frees up resources for mobile apps — which are becoming, for some of us, more important than the website itself.
Language pullback. What? Companies are going to drop languages? That’s right. Some that I’ve spoken to already have dropped a language or two, and others are considering following along. I’m never a fan of dropping languages for budgetary reasons, as this is almost always a shortsighted decision, but it’s a fact of life as companies learn to align their language strategies with their budgets. In the end, pullbacks are far from ideal but probably a sign that companies are no longer making blind assumptions that adding languages will automatically increase sales (this isn’t always the case). So even this trend, while minor, is ultimately going to be a positive one.
Privacy becomes a selling point. The “NSA-gate” scandal is only just beginning to be felt around the world. And the threat to American-based tech companies is very real. I will not be surprised if Google or Microsoft announces non-US hosted services (to bypass the NSA’s grip and attempt to rebuild trust with consumers). And there are already a number of startups emerging in various countries promising to keep user data safe from the “evil” American intelligence agencies. You know this is a serious issue when Apple and Google and Microsoft (and other tech companies) all agree on something.
A non-Latin gTLD awakens American companies. I’ve long written about why I think the Internet is still broken for non-English speakers. But now that ICANN is moving ahead with delegation of generic TLDs, I believe that one (or more) of these domains will act as a wake-up call to those companies that have long overlooked them — and I’m including a number of Silicon Valley software companies as well. I don’t want to predict what domain I think it will be (they are all available for you to see) — let me know if you have a candidate.
Apple drops flags from its global gateway. True, this is not my first prediction along these lines. But I do think 2014 will be the year. And this will make my life a bit easier because I won’t have to respond to any more “But Apple is using flags so why can’t we” questions.
So what do you think about the year ahead?
If you have any predictions to share, please let me know.
Every translation vendor offers the highest-quality translations.
Or so they say.
But how do you know for sure that one translation is better than another translation?
And, for that matter, how do you fairly benchmark machine translation engines?
TAUS has worked on this challenge for the past three years along with a diverse network of translation vendors and buyers, including Intel, Adobe, Google, Lionbridge, and Moravia (among many others).
They’ve developed something they call the Dynamic Quality Framework (DQF), and they took it live earlier this month with a website, knowledge base, and evaluation tools.
To learn more, I recently interviewed TAUS founder and director Jaap van der Meer.
Q: Why is a translation quality framework needed?
In 2009 and 2010 we did a number of workshops with large enterprises with the objective of better understanding the changing landscape for translation and localization services. As part of these sessions we always do a SWOT analysis, and quality assurance and translation quality consistently popped up on the negative side of the charts: as weaknesses and threats. All the enterprises we worked with mentioned that the lack of clarity on translation quality led to disputes, delays, and extra costs in the localization process. Our members asked us to investigate this area further and to assess the possibilities for establishing a translation quality framework.
Q: You have an impressive list of co-creators. It seems that you’ve really built up momentum for this service. Were there any key drivers for this wave of interest and involvement?
Well, on top of the fact that translation quality has never been well defined in all the time there has been a translation industry, the challenges of the last few years have become so much greater because of the emergence of new content types and the increasing interest in technology and translation automation.
Q: What if the source content is poorly written (full of grammatical errors, passive voice, run-on sentences)? How does the DQF take this into account?
We work with a user group that meets every two months and reviews new user requirements. Assessing source content quality has come up as a concern of course and we are studying now how to take this into account in the Dynamic Quality Framework.
Q: Do you have any early success stories to share of how this framework has helped companies improve quality or efficiency?
We have a regular user base now of some 100 companies. They use DQF primarily to get an objective assessment of the quality of their MT systems. Before, they worked with BLEU scores only, which are really not very helpful in a practical environment and not a real measurement of the usability of translations. Also, many companies work with review comments from linguists, which tend to be subjective and biased.
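For context on that criticism: BLEU is essentially a clipped n-gram precision between a candidate translation and a reference, scaled by a brevity penalty. A minimal unigram-only sketch shows the mechanics (real BLEU combines 1- to 4-gram precisions, often against multiple references):

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Minimal BLEU sketch: clipped unigram precision times a brevity penalty.

    Real BLEU geometrically averages 1- to 4-gram precisions; this
    unigram-only version just illustrates the core computation."""
    cand, ref = candidate.split(), reference.split()
    cand_counts, ref_counts = Counter(cand), Counter(ref)
    # Clip each word's count by how often it appears in the reference.
    clipped = sum(min(n, ref_counts[w]) for w, n in cand_counts.items())
    precision = clipped / len(cand)
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * precision

print(round(bleu1("the cat sat on the mat", "the cat is on the mat"), 3))  # 0.833
```

Notice what the score ignores: word order beyond n-gram overlap, meaning, and fluency, which is exactly why a surface-overlap metric says little about whether a translation is actually usable.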
Q: How can other companies take part? Do they need to be TAUS members?
Next month (December) we will start making the DQF tools and knowledge bases available for non-members. Users will then be able to sign up for just one month (to try it out) or for a year without becoming members of TAUS.
Q: The DQF can be applied not only to the more structured content used in documentation and knowledge bases but also to marketing content. How do you measure quality when content must be liberally transcreated into the target language? And what value does the DQF offer for this type of scenario?
We have deliberately chosen the name “Dynamic” Quality Framework, because of the many variables that determine how to evaluate the quality. The type of content is one of the key variables indeed. An important component of the Dynamic Quality Framework is an online wizard to profile the user’s content and to decide – based on that content profile – which evaluation technique and tool to use. For marketing text this will be very different than for instructions for use.
Q: Do you see DQF having an impact on the creation of source content as well?
Yes, even today the adequacy and fluency evaluation tools – that are part of DQF – could already be applied to source content. But as we proceed working with our user group to add features and improve the platform we will ‘dynamically’ evolve to become more effective for source content quality evaluation as well.
Q: An argument against quality benchmarks is that they can be used to suck the life (or art) out of text (both source and translated text). What would you say in response to this?
No, I don’t think so. You must realize that DQF is not a mathematical approach to assessing quality and only counting errors (as most professionals in the industry have been doing for the longest time now with the old LISA QA model or derivatives thereof). For a nice and lively marketing text the DQF content profiler will likely recommend a ‘community feedback’ type of evaluation.
Q: Where do you see the DQF five years from now in terms of functionality?
Our main focus is now on integration and reporting. Next year we will provide the APIs that allow users to integrate DQF into their own editors and localization workflows. This will make it much easier for a much larger group of users to add DQF to their day-to-day production environment. In our current release we provide many different reports for users, but what we would like to do next year is allow users to define their own reports and views of the data in a personalized dashboard.