Measuring translation quality: A Q&A with TAUS founder Jaap van der Meer

Every translation vendor offers the highest-quality translations.

Or so they say.

But how do you know for sure that one translation is better than another?

And, for that matter, how do you fairly benchmark machine translation engines?

TAUS has worked on this challenge for the past three years along with a diverse network of translation vendors and buyers, including Intel, Adobe, Google, Lionbridge, and Moravia (among many others).

They’ve developed something they call the Dynamic Quality Framework (DQF), and they took it live earlier this month with a website, knowledge base, and evaluation tools.

TAUS DQF

To learn more, I recently interviewed TAUS founder and director Jaap van der Meer.

Q: Why is a translation quality framework needed?
In 2009 and 2010 we did a number of workshops with large enterprises with the objective of better understanding the changing landscape for translation and localization services. As part of these sessions we always do a SWOT analysis, and quality assurance and translation quality consistently popped up on the negative side of the charts, as weaknesses and threats. All the enterprises we worked with mentioned that the lack of clarity on translation quality led to disputes, delays, and extra costs in the localization process. Our members asked us to investigate this area further and to assess the possibilities for establishing a translation quality framework.

Q: You have an impressive list of co-creators. It seems that you’ve really built up momentum for this service. Were there any key drivers for this wave of interest and involvement?
Well, translation quality has never been well defined, for as long as there has been a translation industry. On top of that, the challenges of the last few years have become so much greater because of the emergence of new content types and the increasing interest in technology and translation automation.

Q: What if the source content is poorly written (full of grammatical errors, passive voice, run-on sentences)? How does the DQF take this into account?
We work with a user group that meets every two months and reviews new user requirements. Assessing source-content quality has, of course, come up as a concern, and we are now studying how to take it into account in the Dynamic Quality Framework.

Q: Do you have any early success stories to share of how this framework has helped companies improve quality or efficiency?
We have a regular user base now of some 100 companies. They use DQF primarily to get an objective assessment of the quality of their MT systems. Before, they worked with BLEU scores only, which are really not very helpful in a practical environment and are not a real measure of the usability of translations. Many companies also work with review comments from linguists, which tend to be subjective and biased.
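To see why a BLEU score alone can mislead, here is a minimal sketch (assuming NLTK is installed; the sentences are invented for illustration). BLEU rewards n-gram overlap with a reference translation, so a perfectly usable paraphrase can score near zero while a near-copy scores high:

```python
# Illustration of BLEU's blind spot: it counts n-gram overlap with a
# reference, not the usability of the translation. Example sentences
# below are made up.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "installation", "completed", "successfully"]]
paraphrase = ["setup", "finished", "without", "errors"]        # usable, no overlap
near_copy = ["the", "installation", "completed", "correctly"]  # heavy overlap

smooth = SmoothingFunction().method1  # avoid zero scores on short sentences
print(sentence_bleu(reference, paraphrase, smoothing_function=smooth))  # near 0
print(sentence_bleu(reference, near_copy, smoothing_function=smooth))   # much higher
```

Both hypotheses would be acceptable to a reader, yet BLEU ranks them very differently, which is why it is a poor proxy for usability.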

Q: How can other companies take part? Do they need to be TAUS members?
Next month (December) we will start making the DQF tools and knowledge bases available for non-members. Users will then be able to sign up for just one month (to try it out) or for a year without becoming members of TAUS.

Q: The DQF can be applied not only to the more structured content used in documentation and knowledge bases but also to marketing content. How do you measure quality when content must be liberally transcreated into the target language? And what value does the DQF offer in this type of scenario?
We have deliberately chosen the name “Dynamic” Quality Framework because of the many variables that determine how to evaluate quality. The type of content is indeed one of the key variables. An important component of the Dynamic Quality Framework is an online wizard that profiles the user’s content and decides – based on that content profile – which evaluation technique and tool to use. For marketing text this will be very different from what it is for instructions for use.

Q: Do you see DQF having an impact on the creation of source content as well?
Yes, even today the adequacy and fluency evaluation tools – which are part of DQF – could already be applied to source content. But as we continue working with our user group to add features and improve the platform, we will ‘dynamically’ evolve to become more effective for source-content quality evaluation as well.
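For readers unfamiliar with adequacy/fluency evaluation: raters typically score each segment on two separate scales – adequacy (is the meaning of the source preserved?) and fluency (does the text read naturally?) – and scores are averaged across raters. A minimal sketch with hypothetical data:

```python
# Sketch of the standard adequacy/fluency protocol: each rater scores a
# segment on 1-5 scales, and scores are averaged per segment.
# The ratings below are hypothetical.
from statistics import mean

# {segment_id: [(adequacy, fluency), ...]} -- one tuple per rater
ratings = {
    "seg-001": [(5, 4), (4, 4), (5, 5)],
    "seg-002": [(2, 4), (3, 3), (2, 4)],  # fluent but inaccurate
}

for seg, scores in ratings.items():
    adequacy = mean(a for a, _ in scores)
    fluency = mean(f for _, f in scores)
    print(f"{seg}: adequacy={adequacy:.1f}, fluency={fluency:.1f}")
```

Separating the two scales is what makes the method applicable to source text as well: fluency can be judged on the source alone.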

Q: An argument against quality benchmarks is that they can be used to suck the life (or art) out of text (both source and translated text). What would you say in response to this?
No, I don’t think so. You must realize that DQF is not a mathematical approach that assesses quality only by counting errors (as most professionals in the industry have done for the longest time with the old LISA QA model or derivatives thereof). For a nice and lively marketing text, the DQF content profiler will likely recommend a ‘community feedback’ type of evaluation.
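For contrast, here is roughly what that older error-counting approach looks like in practice (the weights and threshold below are illustrative, not the actual LISA values): errors are tallied by severity, weighted, and measured against a pass/fail threshold per 1,000 words.

```python
# Sketch of a LISA-QA-style severity-weighted error count.
# Weights and threshold are illustrative assumptions only.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}

def weighted_error_score(errors, word_count, threshold_per_1000=15):
    """Return (score, passed): weighted errors per 1,000 words vs. a threshold."""
    total = sum(SEVERITY_WEIGHTS[sev] for sev in errors)
    score = total / word_count * 1000
    return score, score <= threshold_per_1000

# e.g. two minor errors and one major error in a 500-word job
print(weighted_error_score(["minor", "minor", "major"], word_count=500))
```

A scheme like this says nothing about whether a lively marketing text actually works on its audience, which is the gap a community-feedback evaluation is meant to fill.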

Q: Where do you see the DQF five years from now in terms of functionality?
Our main focus is now on integration and reporting. Next year we will provide the APIs that allow users to integrate DQF into their own editors and localization workflows. This will make it so much easier for a much larger group of users to add DQF to their day-to-day production environment. In our current release we provide many different reports for users, but what we would like to do next year is allow users to define their own reports and views of the data in a personalized dashboard.

