Summary
Q: What problem does translation AI like AQE and APE solve?
A: AI helps organisations translate more content quickly and affordably while improving quality. It reduces editor fatigue and helps localisation scale across global markets.
Q: How did LanguageLine and TAUS EPIC perform in testing?
A: In the software domain, EPIC’s AQE matched human reviewers, with error rates as low as 1 per cent. APE improved two-thirds of the segments it edited, boosting overall quality scores.
Q: What is the main takeaway about translation AI for global businesses?
A: AI works best as a partner. AQE and APE cut costs, speed workflows, and support linguists, but human expertise remains critical for context and nuanced translation.
Testing the Power of AQE and APE with TAUS EPIC
This blog is part of a LanguageLine series where our Solutions Architects share real-world insights from client implementations, platform evaluations, and technical research.
The language services industry faces a familiar tension: how to use AI to handle exponentially more content, faster and cheaper, without sacrificing quality.
Demand for localisation is skyrocketing, but budgets and timelines aren’t expanding to match. Adding more human translators doesn’t scale. That’s why we’re investing in technologies like Automated Quality Estimation (AQE) and Automated Post-Editing (APE). These tools can predict translation quality and refine output before humans ever see it.
But theory and practice differ. To test what’s possible, we partnered with TAUS and their AI-powered platform, EPIC.
Post-editors often face machine-translated documents without knowing which sentences need serious work. AQE changes that by scoring each segment, creating a triage system so editors know where to focus.
APE goes further, cleaning up obvious MT errors in grammar, terminology, and structure before human review. The payoff: consistent output, faster turnaround, and less editor fatigue. With content volumes exploding, intelligent automation is becoming essential.
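To make that division of labour concrete, here is a minimal Python sketch of the idea: score every segment, auto-edit the weak ones, and flag only what still needs a human. The aqe_score and ape_edit functions are dummy stand-ins of our own, not EPIC’s API, and the routing logic and threshold are illustrative assumptions, not a description of how any particular platform behaves internally.

```python
from dataclasses import dataclass

def aqe_score(source: str, mt: str) -> float:
    # Dummy stand-in: a real AQE model estimates adequacy and fluency.
    # Here we only penalise large length mismatches so the sketch runs.
    ratio = min(len(source), len(mt)) / max(len(source), len(mt), 1)
    return round(ratio, 2)

def ape_edit(source: str, mt: str) -> str:
    # Dummy stand-in: a real APE system fixes grammar, terminology and
    # structure. Here we only normalise whitespace for illustration.
    return " ".join(mt.split())

@dataclass
class Segment:
    source: str
    mt: str
    score: float = 0.0
    needs_review: bool = True

def triage(segments: list[Segment], threshold: float = 0.85) -> list[Segment]:
    """Score every segment, auto-edit the weak ones, and flag only what
    still needs a human post-editor's attention."""
    for seg in segments:
        seg.score = aqe_score(seg.source, seg.mt)
        if seg.score < threshold:
            # APE cleans up the segment before a human ever sees it.
            seg.mt = ape_edit(seg.source, seg.mt)
            seg.score = aqe_score(seg.source, seg.mt)
        seg.needs_review = seg.score < threshold
    return segments
```

In a real deployment the threshold would be tuned per language pair and content type rather than fixed as it is here.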
While building our own AQE, we evaluated third-party options. TAUS’s EPIC stood out for combining AQE and APE in one solution. Their case studies suggest EPIC can reduce post-editing costs by up to 80%.
We knew a generic test wouldn’t match a fully customised deployment, but it could provide benchmarks.
We tested two high-stakes domains: software and health care.
Foundation: We used approved, human-translated content as a benchmark, then machine-translated 500 segments per domain.
Workflow: Segments went through AQE scoring with a deliberately high threshold of 1.0, so that every segment was also routed through APE (sketched below).
Validation: Human linguists then reviewed both raw MT and APE-enhanced output for accuracy and fluency.
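For illustration, here is a minimal sketch of how a harness along these lines could be wired up, again using placeholder aqe_score and ape_edit functions rather than EPIC’s real integration points. With the threshold pinned at 1.0, every segment routes through APE, and the output is a review sheet linguists can use to compare raw MT with the APE-enhanced version.

```python
import csv
import random

AQE_THRESHOLD = 1.0  # pinned to the top of the scale so every segment
                     # scoring below 1.0 is routed through APE

def aqe_score(source: str, mt: str) -> float:
    # Placeholder scorer so the harness runs end to end.
    return round(random.uniform(0.60, 0.99), 2)

def ape_edit(source: str, mt: str) -> str:
    # Placeholder post-editor: only normalises whitespace.
    return " ".join(mt.split())

def build_review_sheet(segments, path="review_sheet.csv"):
    """segments: iterable of (source, human_reference, raw_mt) triples,
    e.g. the 500 machine-translated segments per domain. Writes a sheet
    so linguists can review both raw MT and APE output for accuracy
    and fluency."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["source", "reference", "raw_mt", "raw_aqe",
                         "ape_output", "ape_aqe"])
        for source, reference, raw_mt in segments:
            raw_aqe = aqe_score(source, raw_mt)
            # With the threshold at 1.0 this always routes to APE.
            if raw_aqe < AQE_THRESHOLD:
                ape_output = ape_edit(source, raw_mt)
            else:
                ape_output = raw_mt
            writer.writerow([source, reference, raw_mt, raw_aqe,
                             ape_output, aqe_score(source, ape_output)])
```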
AQE Accuracy: EPIC’s AQE aligned strongly with human evaluation for software. German and Italian showed error rates as low as 1%, Spanish 3%, and French 5%. TAUS benchmarks show that an AQE threshold of 0.85 identifies “good” content 96% of the time.
Health care was more challenging. Spanish error rates reached 20%, mainly because EPIC evaluates segments individually, while humans consider broader context. Context remains a critical challenge in segment-based MT workflows.
APE Evaluation: EPIC auto-edited about 10% of segments. Human reviewers judged two-thirds of those edits equal or superior to the raw MT. Every post-edited segment scored higher on AQE, showing consistent quality gains.
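For readers who want to see how such figures can be derived, the sketch below computes the two headline metrics from per-segment validation records: the AQE-versus-human error rate and the share of APE edits judged at least as good as raw MT. The record structure and field names are hypothetical, not EPIC’s schema.

```python
from dataclasses import dataclass

@dataclass
class Validation:
    """One segment's outcome from the human validation pass.
    Field names are illustrative, not EPIC's schema."""
    aqe_says_good: bool         # AQE score at or above the quality cut-off
    human_says_good: bool       # linguist accepted the segment as-is
    was_ape_edited: bool        # an automatic post-edit was applied
    ape_at_least_as_good: bool  # reviewer rated APE output >= raw MT

def aqe_error_rate(results: list[Validation]) -> float:
    """Share of segments where AQE and the human reviewer disagree --
    the per-language figure reported above (e.g. ~1% for German/Italian)."""
    if not results:
        return 0.0
    disagreements = sum(r.aqe_says_good != r.human_says_good for r in results)
    return disagreements / len(results)

def ape_improvement_share(results: list[Validation]) -> float:
    """Of the segments APE actually edited, the share judged equal or
    superior to the raw MT (about two-thirds in our test)."""
    edited = [r for r in results if r.was_ape_edited]
    if not edited:
        return 0.0
    return sum(r.ape_at_least_as_good for r in edited) / len(edited)
```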
Benchmarks are useful, but the real value comes from deployment, and that is where our roadmap is focused.
This project reinforced a core belief: AI doesn’t replace linguists; it empowers them. EPIC and similar platforms make human translation smarter and faster when integrated thoughtfully, with clear awareness of their limits.
We’re continuing to test, tune, and expand these capabilities, always keeping human expertise at the centre.
If you’re evaluating platforms like EPIC, designing a risk-based strategy, or building a custom MT program, our team can help. Contact your LanguageLine Business Account Manager or email us at translation@languageline.co.uk
Amanda Downing is a Solutions Architect with LanguageLine Solutions.