Measuring translation quality: A Q&A with TAUS founder Jaap van der Meer

Every translation vendor offers the highest-quality translations.

Or so they say.

But how do you know for sure that one translation is better than another translation?

And, for that matter, how do you fairly benchmark machine translation engines?

TAUS has worked on this challenge for the past three years along with a diverse network of translation vendors and buyers, including Intel, Adobe, Google, Lionbridge, and Moravia (among many others).

They’ve developed something they call the Dynamic Quality Framework (DQF), and they took it live earlier this month with a website, knowledge base, and evaluation tools.

TAUS DQF

To learn more, I recently interviewed TAUS founder and director Jaap van der Meer.

Q: Why is a translation quality framework needed?
In 2009 and 2010 we did a number of workshops with large enterprises with the objective of better understanding the changing landscape for translation and localization services. As part of these sessions we always do a SWOT analysis, and quality assurance and translation quality consistently popped up on the negative side of the charts, as weaknesses and threats. All the enterprises we worked with mentioned that the lack of clarity on translation quality led to disputes, delays, and extra costs in the localization process. Our members asked us to investigate this area further and to assess the possibilities for establishing a translation quality framework.

Q: You have an impressive list of co-creators. It seems that you’ve really built up momentum for this service. Were there any key drivers for this wave of interest and involvement?
Well, translation quality has never been well defined for as long as there has been a translation industry, and on top of that the challenges of the last few years have become so much greater because of the emergence of new content types and the increasing interest in technology and translation automation.

Q: What if the source content is poorly written (full of grammatical errors, passive voice, run-on sentences)? How does the DQF take this into account?
We work with a user group that meets every two months and reviews new user requirements. Assessing source content quality has, of course, come up as a concern, and we are now studying how to take this into account in the Dynamic Quality Framework.

Q: Do you have any early success stories to share of how this framework has helped companies improve quality or efficiency?
We now have a regular user base of some 100 companies. They use DQF primarily to get an objective assessment of the quality of their MT systems. Before, they worked with BLEU scores only, which are really not very helpful in a practical environment and not a real measure of the usability of translations. Also, many companies work with review comments from linguists, which tend to be subjective and biased.
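The BLEU critique is easier to see with a concrete example. Below is a minimal sketch using the open-source sacrebleu library; the sentences and scores are purely illustrative and are not from TAUS or DQF. It shows how a translation with a critical meaning error can still score higher than an accurate paraphrase, simply because it shares more n-grams with the reference.

```python
# Minimal sketch: why BLEU alone is a poor proxy for usability.
# Assumes the third-party "sacrebleu" package (pip install sacrebleu);
# the example sentences are invented for illustration.
import sacrebleu

# One reference translation per hypothesis (sacrebleu expects a list of
# reference lists).
references = [["Press the power button to turn on the device."]]

# A fluent, adequate translation that happens to use different wording.
paraphrase = ["To switch the device on, press the power button."]

# A near-verbatim copy with a critical error (on/off flipped).
wrong_meaning = ["Press the power button to turn off the device."]

print(sacrebleu.corpus_bleu(paraphrase, references).score)
print(sacrebleu.corpus_bleu(wrong_meaning, references).score)

# The erroneous sentence typically scores higher because it overlaps more
# with the reference, even though a human reviewer would reject it.
```

This is the kind of gap between surface overlap and real-world usability that evaluation frameworks like DQF try to address with task-based and human judgments.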

Q: How can other companies take part? Do they need to be TAUS members?
Next month (December) we will start making the DQF tools and knowledge bases available for non-members. Users will then be able to sign up for just one month (to try it out) or for a year without becoming members of TAUS.

Q: The DQF can be applied not only to the more structured content used in documentation and knowledge bases but also to marketing content. How do you measure quality when content must be liberally transcreated into the target language? And what value does the DQF offer for this type of scenario?
We have deliberately chosen the name “Dynamic” Quality Framework, because of the many variables that determine how to evaluate the quality. The type of content is one of the key variables indeed. An important component of the Dynamic Quality Framework is an online wizard to profile the user’s content and to decide – based on that content profile – which evaluation technique and tool to use. For marketing text this will be very different than for instructions for use.

Q: Do you see DQF having an impact on the creation of source content as well?
Yes, even today the adequacy and fluency evaluation tools that are part of DQF could already be applied to source content. And as we continue working with our user group to add features and improve the platform, we will ‘dynamically’ evolve to become more effective at source content quality evaluation as well.

Q: An argument against quality benchmarks is that they can be used to suck the life (or art) out of text (both source and translated text). What would you say in response to this?
No, I don’t think so. You must realize that DQF is not a mathematical approach to assessing quality that only counts errors (as most professionals in the industry have been doing for the longest time with the old LISA QA model or derivatives thereof). For a nice, lively marketing text, the DQF content profiler will likely recommend a ‘community feedback’ type of evaluation.

Q: Where do you see the DQF five years from now in terms of functionality?
Our main focus now is on integration and reporting. Next year we will provide APIs that allow users to integrate DQF into their own editors and localization workflows. This will make it so much easier for a much larger group of users to add DQF to their day-to-day production environment. In our current release we provide many different reports for users, but what we would like to do next year is let users define their own reports and views of the data in a personalized dashboard.


Transcreation is here to stay

In 2005, I wrote that transcreation was gaining momentum.

I predicted that we’d see a lot more use of this word in the years ahead. Why? Because “translation sounds like a commodity; transcreation sounds like a service.”

So here we are in 2013, and a Google search on “transcreation” brings up 392,000 results.

Translators often cringe when hearing this word. And I have often felt the urge to do the same because, frankly, good translators and translation agencies have been providing this service all along.

The idea that literal, word-for-word translation is the only service translators provide is simply wrong, and it has to some extent been propagated by a translation industry built on stressing quality (as in literal translation) over more marketing-oriented translation.

So now we have a number of marketing firms and advertising agencies that use this term quite liberally to promote their unique brand of translation services. Here is a screen grab from the website of Hogarth:

Hogarth and Transcreation

By the way, Hogarth is looking to hire a Transcreation Account Manager to “manage the transcreation and production of advertising for major global brands.” Here is the link.

Transcreation is here to stay.

Looking for a translation icon?

If you haven’t visited the Noun Project yet, take a moment and drop by.

It’s a great initiative to provide open source icons. All you have to do is provide attribution according to the Creative Commons license.

I recently noticed the addition of a translation icon.

I believe Microsoft was the first company to develop a translation icon along these lines, which was used as part of Microsoft Office.

Here’s an icon currently in use on the Bing Translator page:

Google quickly followed along with its Google Translate icon, shown here:

(Contact me if there is another company that is using a variation of this translation icon.)

To be clear, I would NOT use this icon as part of a global gateway.

This icon is not about finding localized content — it’s about getting content translated (usually via machine translation).

For the global gateway, I recommend this open source icon:

For more on global gateways, check out The Art of the Global Gateway.

UPDATE: Here’s the machine translation icon used by Yamagata Europe: