quickmt vs. libretranslate

libretranslate is a popular (13k stars on GitHub!) library that lets users self-host (or run offline) translation for many different languages. The neural machine translation component of the system is provided by argos translate (5k stars on GitHub). Both libretranslate and argostranslate are quite easy to get up and running, and the libretranslate user interface is quite slick, but neither library is quick.

In this short article we examine the quality and speed of libretranslate in comparison with quickmt.

In all of the experiments we translate the flores-devtest test set, a popular general-domain machine translation test set covering 200 languages and comprising 1012 segments (sentences or short paragraphs).

All of the quickmt models are available on Huggingface, and the libretranslate models are available from argos translate.

libretranslate and quickmt were both configured to run on an Nvidia RTX 4070s with batch size 32, and libretranslate was launched with the following command:

ARGOS_DEVICE_TYPE=cuda libretranslate --load-only pt,de,fa,cs,tr,es,pl,bn,vi,fr,it,id,hu,th,lv,ko,ar,en,zh,ro,hi,he,ja,da,el,ur,ru 

Speed Comparison

quickmt is considerably faster than libretranslate:

Code
import pandas as pd
from plotly import express as px

df_data = {'index': {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10, 11: 11, 12: 12, 13: 13, 14: 14, 15: 15, 16: 16, 17: 17, 18: 18, 19: 19, 20: 20, 21: 21, 22: 22, 23: 23, 24: 24, 25: 25, 26: 26, 27: 27, 28: 28, 29: 29, 30: 30, 31: 31, 32: 32, 33: 33, 34: 34, 35: 35, 36: 36, 37: 37, 38: 38, 39: 39, 40: 40, 41: 41, 42: 42, 43: 43, 44: 44, 45: 45, 46: 46, 47: 47, 48: 48, 49: 49, 50: 50, 51: 51, 52: 0, 53: 1, 54: 2, 55: 3, 56: 4, 57: 5, 58: 6, 59: 7, 60: 8, 61: 9, 62: 10, 63: 11, 64: 12, 65: 13, 66: 14, 67: 15, 68: 16, 69: 17, 70: 18, 71: 19, 72: 20, 73: 21, 74: 22, 75: 23, 76: 24, 77: 25, 78: 26, 79: 27, 80: 28, 81: 29, 82: 30, 83: 31, 84: 32, 85: 33, 86: 34, 87: 35, 88: 36, 89: 37, 90: 38, 91: 39, 92: 40, 93: 41, 94: 42, 95: 43, 96: 44, 97: 45, 98: 46, 99: 47, 100: 48, 101: 49, 102: 50}, 'time': {0: 1.2328033447, 1: 1.7316634655, 2: 1.1428749561, 3: 1.2016947269, 4: 1.1951999664, 5: 1.2683308125, 6: 1.0825681686, 7: 1.2144613266, 8: 1.2252991199, 9: 1.2541656494, 10: 1.1193022728, 11: 1.0900781155, 12: 1.0908696651, 13: 1.1799731255, 14: 1.1092069149000001, 15: 1.3009929657, 16: 1.1924922466, 17: 1.321138382, 18: 1.404417038, 19: 1.2471661568, 20: 1.0757014751, 21: 1.1705803871, 22: 1.1219103336, 23: 1.2237184048, 24: 1.1639976501, 25: 1.1322209834999999, 26: 1.0975823402, 27: 1.0930819511, 28: 1.121234417, 29: 1.4847657681, 30: 1.1623337269, 31: 1.2030494213, 32: 1.2224872112, 33: 1.2217383385, 34: 1.0560030937, 35: 0.9569888115, 36: 1.1156563759, 37: 1.2865319252, 38: 1.1470780373, 39: 1.2559833527, 40: 1.2100195885, 41: 1.1664426327, 42: 1.2318534851, 43: 1.2992098331, 44: 1.4252364635, 45: 2.0381476879, 46: 1.3772408962, 47: 1.0434963703, 48: 1.0937628746, 49: 1.1264841557, 50: 1.1113624573, 51: 1.2100260258, 52: 52.0006783009, 53: 47.4294705391, 54: 48.9193472862, 55: 51.4381048679, 56: 51.9525535107, 57: 51.6938693523, 58: 50.0207080841, 59: 54.0675241947, 60: 50.6869974136, 61: 47.6485154629, 62: 50.6348552704, 63: 48.9413232803, 64: 
49.865562439, 65: 53.2831482887, 66: 57.5475423336, 67: 52.2292399406, 68: 56.8881397247, 69: 51.0274362564, 70: 53.1056745052, 71: 50.9684550762, 72: 52.7217481136, 73: 48.5589823723, 74: 51.3466742039, 75: 49.7405352592, 76: 45.0998287201, 77: 49.0215342045, 78: 45.2211201191, 79: 53.4577124119, 80: 50.652441263200004, 81: 50.4762699604, 82: 51.7814288139, 83: 51.7295336723, 84: 52.9573953152, 85: 49.7250933647, 86: 46.9384317398, 87: 50.8940937519, 88: 56.0273029804, 89: 47.4176292419, 90: 53.34983325, 91: 51.4188220501, 92: 52.6518859863, 93: 52.7884061337, 94: 76.0510730743, 95: 51.4100003242, 96: 58.9121079445, 97: 49.9179136753, 98: 48.0125050545, 99: 47.6589736938, 100: 51.1337733269, 101: 55.2462155819, 102: 58.4858531952}, 'src_lang': {0: 'ko', 1: 'en', 2: 'it', 3: 'en', 4: 'es', 5: 'en', 6: 'hu', 7: 'en', 8: 'zh', 9: 'en', 10: 'ar', 11: 'en', 12: 'hi', 13: 'en', 14: 'vi', 15: 'en', 16: 'ro', 17: 'en', 18: 'pl', 19: 'en', 20: 'pt', 21: 'en', 22: 'de', 23: 'en', 24: 'he', 25: 'en', 26: 'tr', 27: 'en', 28: 'ja', 29: 'en', 30: 'cs', 31: 'en', 32: 'lv', 33: 'en', 34: 'id', 35: 'en', 36: 'el', 37: 'en', 38: 'th', 39: 'en', 40: 'da', 41: 'en', 42: 'ru', 43: 'en', 44: 'fr', 45: 'en', 46: 'bn', 47: 'en', 48: 'fa', 49: 'en', 50: 'ur', 51: 'en', 52: 'ko', 53: 'en', 54: 'it', 55: 'en', 56: 'es', 57: 'en', 58: 'hu', 59: 'en', 60: 'zh', 61: 'en', 62: 'ar', 63: 'en', 64: 'hi', 65: 'en', 66: 'en', 67: 'ro', 68: 'en', 69: 'pl', 70: 'en', 71: 'pt', 72: 'en', 73: 'de', 74: 'en', 75: 'he', 76: 'en', 77: 'tr', 78: 'en', 79: 'ja', 80: 'en', 81: 'cs', 82: 'en', 83: 'lv', 84: 'en', 85: 'id', 86: 'en', 87: 'el', 88: 'en', 89: 'th', 90: 'en', 91: 'da', 92: 'en', 93: 'ru', 94: 'en', 95: 'fr', 96: 'en', 97: 'bn', 98: 'en', 99: 'fa', 100: 'en', 101: 'ur', 102: 'en'}, 'tgt_lang': {0: 'en', 1: 'ko', 2: 'en', 3: 'it', 4: 'en', 5: 'es', 6: 'en', 7: 'hu', 8: 'en', 9: 'zh', 10: 'en', 11: 'ar', 12: 'en', 13: 'hi', 14: 'en', 15: 'vi', 16: 'en', 17: 'ro', 18: 'en', 19: 'pl', 20: 'en', 21: 
'pt', 22: 'en', 23: 'de', 24: 'en', 25: 'he', 26: 'en', 27: 'tr', 28: 'en', 29: 'ja', 30: 'en', 31: 'cs', 32: 'en', 33: 'lv', 34: 'en', 35: 'id', 36: 'en', 37: 'el', 38: 'en', 39: 'th', 40: 'en', 41: 'da', 42: 'en', 43: 'ru', 44: 'en', 45: 'fr', 46: 'en', 47: 'bn', 48: 'en', 49: 'fa', 50: 'en', 51: 'ur', 52: 'en', 53: 'ko', 54: 'en', 55: 'it', 56: 'en', 57: 'es', 58: 'en', 59: 'hu', 60: 'en', 61: 'zh', 62: 'en', 63: 'ar', 64: 'en', 65: 'hi', 66: 'vi', 67: 'en', 68: 'ro', 69: 'en', 70: 'pl', 71: 'en', 72: 'pt', 73: 'en', 74: 'de', 75: 'en', 76: 'he', 77: 'en', 78: 'tr', 79: 'en', 80: 'ja', 81: 'en', 82: 'cs', 83: 'en', 84: 'lv', 85: 'en', 86: 'id', 87: 'en', 88: 'el', 89: 'en', 90: 'th', 91: 'en', 92: 'da', 93: 'en', 94: 'ru', 95: 'en', 96: 'fr', 97: 'en', 98: 'bn', 99: 'en', 100: 'fa', 101: 'en', 102: 'ur'}, 'bleu': {0: 27.08, 1: 14.93, 2: 32.09, 3: 30.52, 4: 28.63, 5: 26.64, 6: 35.0, 7: 28.71, 8: 28.74, 9: 2.35, 10: 42.88, 11: 29.6, 12: 39.84, 13: 35.94, 14: 37.57, 15: 43.69, 16: 44.93, 17: 42.29, 18: 27.46, 19: 21.76, 20: 48.68, 21: 50.51, 22: 44.16, 23: 40.15, 24: 45.01, 25: 34.32, 26: 39.46, 27: 32.75, 28: 27.85, 29: 3.59, 30: 39.61, 31: 33.73, 32: 35.3, 33: 31.47, 34: 44.51, 35: 48.68, 36: 35.45, 37: 28.86, 38: 29.32, 39: 9.56, 40: 49.02, 41: 46.61, 42: 34.69, 43: 32.28, 44: 44.35, 45: 50.05, 46: 32.9, 47: 19.26, 48: 37.55, 49: 26.17, 50: 31.48, 51: 20.81, 52: 14.13, 53: 6.11, 54: 28.58, 55: 25.9, 56: 25.63, 57: 23.43, 58: 29.71, 59: 24.3, 60: 20.51, 61: 1.1, 62: 29.36, 63: 17.24, 64: 26.72, 65: 28.76, 66: 33.8, 67: 39.71, 68: 33.06, 69: 26.18, 70: 18.82, 71: 46.33, 72: 45.83, 73: 36.3, 74: 31.75, 75: 32.19, 76: 24.37, 77: 23.9, 78: 18.75, 79: 13.55, 80: 1.85, 81: 35.05, 82: 28.51, 83: 31.1, 84: 29.67, 85: 32.65, 86: 37.75, 87: 31.4, 88: 26.34, 89: 15.33, 90: 1.05, 91: 44.64, 92: 42.43, 93: 36.95, 94: 32.07, 95: 42.0, 96: 47.23, 97: 15.84, 98: 7.92, 99: 26.25, 100: 21.79, 101: 13.29, 102: 13.12}, 'chrf2': {0: 56.24, 1: 36.89, 2: 61.47, 3: 59.73, 4: 58.61, 5: 
55.13, 6: 62.41, 7: 59.4, 8: 57.9, 9: 34.5, 10: 67.01, 11: 61.65, 12: 65.0, 13: 59.92, 14: 62.84, 15: 60.76, 16: 69.3, 17: 66.07, 18: 57.18, 19: 52.1, 20: 71.48, 21: 71.75, 22: 68.83, 23: 66.25, 24: 68.39, 25: 62.37, 26: 65.02, 27: 63.78, 28: 56.98, 29: 42.02, 30: 65.63, 31: 60.29, 32: 62.91, 33: 60.5, 34: 68.77, 35: 71.94, 36: 61.89, 37: 54.93, 38: 58.4, 39: 54.79, 40: 71.78, 41: 70.07, 42: 62.31, 43: 59.11, 44: 68.15, 45: 71.31, 46: 59.7, 47: 53.32, 48: 63.34, 49: 54.07, 50: 58.35, 51: 48.55, 52: 42.15, 53: 23.95, 54: 58.81, 55: 56.37, 56: 56.91, 57: 52.71, 58: 58.63, 59: 55.59, 60: 51.7, 61: 29.57, 62: 56.44, 63: 44.69, 64: 55.25, 65: 53.65, 66: 52.98, 67: 66.1, 68: 60.73, 69: 55.67, 70: 49.69, 71: 69.84, 72: 69.27, 73: 63.56, 74: 60.49, 75: 56.77, 76: 53.62, 77: 52.91, 78: 52.15, 79: 42.83, 80: 29.2, 81: 62.47, 82: 56.74, 83: 59.21, 84: 59.08, 85: 60.71, 86: 65.21, 87: 58.94, 88: 53.27, 89: 45.03, 90: 44.97, 91: 68.93, 92: 67.42, 93: 63.11, 94: 59.01, 95: 66.74, 96: 69.45, 97: 42.95, 98: 37.76, 99: 53.3, 100: 49.47, 101: 38.88, 102: 38.33}, 'comet22': {0: 86.01, 1: 87.05, 2: 87.29, 3: 87.54, 4: 86.1, 5: 85.15, 6: 87.76, 7: 88.01, 8: 86.34, 9: 85.3, 10: 87.4, 11: 86.32, 12: 88.76, 13: 79.05, 14: 87.2, 15: 87.52, 16: 89.31, 17: 89.67, 18: 85.04, 19: 87.14, 20: 89.09, 21: 89.23, 22: 88.9, 23: 86.9, 24: 88.31, 25: 87.91, 26: 88.99, 27: 89.42, 28: 87.24, 29: 89.06, 30: 88.17, 31: 88.77, 32: 86.98, 33: 86.22, 34: 89.24, 35: 91.02, 36: 87.29, 37: 88.85, 38: 87.15, 39: 84.52, 40: 90.0, 41: 89.49, 42: 85.96, 43: 87.76, 44: 88.76, 45: 86.99, 46: 86.99, 47: 84.68, 48: 87.76, 49: 85.79, 50: 84.25, 51: 77.67, 52: 72.46, 53: 71.64, 54: 85.24, 55: 83.68, 56: 84.79, 57: 82.75, 58: 85.8, 59: 83.99, 60: 83.07, 61: 83.15, 62: 80.94, 63: 76.66, 64: 82.34, 65: 73.22, 66: 83.76, 67: 87.97, 68: 86.77, 69: 84.12, 70: 84.76, 71: 88.57, 72: 87.99, 73: 86.35, 74: 81.21, 75: 80.46, 76: 81.42, 77: 78.82, 78: 78.31, 79: 76.04, 80: 77.92, 81: 86.71, 82: 85.81, 83: 85.02, 84: 86.18, 85: 
85.17, 86: 86.6, 87: 85.84, 88: 87.38, 89: 77.89, 90: 78.78, 91: 88.77, 92: 87.28, 93: 86.44, 94: 88.32, 95: 88.31, 96: 85.97, 97: 76.92, 98: 74.34, 99: 81.26, 100: 80.39, 101: 65.69, 102: 69.6}, 'system': {0: 'quickmt', 1: 'quickmt', 2: 'quickmt', 3: 'quickmt', 4: 'quickmt', 5: 'quickmt', 6: 'quickmt', 7: 'quickmt', 8: 'quickmt', 9: 'quickmt', 10: 'quickmt', 11: 'quickmt', 12: 'quickmt', 13: 'quickmt', 14: 'quickmt', 15: 'quickmt', 16: 'quickmt', 17: 'quickmt', 18: 'quickmt', 19: 'quickmt', 20: 'quickmt', 21: 'quickmt', 22: 'quickmt', 23: 'quickmt', 24: 'quickmt', 25: 'quickmt', 26: 'quickmt', 27: 'quickmt', 28: 'quickmt', 29: 'quickmt', 30: 'quickmt', 31: 'quickmt', 32: 'quickmt', 33: 'quickmt', 34: 'quickmt', 35: 'quickmt', 36: 'quickmt', 37: 'quickmt', 38: 'quickmt', 39: 'quickmt', 40: 'quickmt', 41: 'quickmt', 42: 'quickmt', 43: 'quickmt', 44: 'quickmt', 45: 'quickmt', 46: 'quickmt', 47: 'quickmt', 48: 'quickmt', 49: 'quickmt', 50: 'quickmt', 51: 'quickmt', 52: 'libretranslate', 53: 'libretranslate', 54: 'libretranslate', 55: 'libretranslate', 56: 'libretranslate', 57: 'libretranslate', 58: 'libretranslate', 59: 'libretranslate', 60: 'libretranslate', 61: 'libretranslate', 62: 'libretranslate', 63: 'libretranslate', 64: 'libretranslate', 65: 'libretranslate', 66: 'libretranslate', 67: 'libretranslate', 68: 'libretranslate', 69: 'libretranslate', 70: 'libretranslate', 71: 'libretranslate', 72: 'libretranslate', 73: 'libretranslate', 74: 'libretranslate', 75: 'libretranslate', 76: 'libretranslate', 77: 'libretranslate', 78: 'libretranslate', 79: 'libretranslate', 80: 'libretranslate', 81: 'libretranslate', 82: 'libretranslate', 83: 'libretranslate', 84: 'libretranslate', 85: 'libretranslate', 86: 'libretranslate', 87: 'libretranslate', 88: 'libretranslate', 89: 'libretranslate', 90: 'libretranslate', 91: 'libretranslate', 92: 'libretranslate', 93: 'libretranslate', 94: 'libretranslate', 95: 'libretranslate', 96: 'libretranslate', 97: 'libretranslate', 98: 
'libretranslate', 99: 'libretranslate', 100: 'libretranslate', 101: 'libretranslate', 102: 'libretranslate'}}

df = pd.DataFrame(df_data)
df[["system", "time"]].groupby("system").mean()
                     time
system
libretranslate  51.787339
quickmt          1.216358

It takes just over 1 second to translate the 1012 segments with quickmt, but more than 50 seconds with libretranslate! The libretranslate models were warmed up (loaded) before starting the timer.
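As a quick sanity check, the overall speedup follows directly from the mean times in the table above (the numbers below are copied from that table, not re-measured):

```python
import pandas as pd

# Mean wall-clock seconds to translate the 1012-segment flores-devtest set,
# taken from the groupby table above.
mean_times = pd.Series({"quickmt": 1.216358, "libretranslate": 51.787339})

speedup = mean_times["libretranslate"] / mean_times["quickmt"]
print(f"quickmt is {speedup:.1f}x faster")  # roughly 42.6x
```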

Why is there such a large difference in speed?

Both libretranslate and quickmt use ctranslate2 for translation and sentencepiece for tokenization under the hood, but libretranslate uses stanza for sentence splitting while quickmt uses blingfire. While blingfire is not a particularly accurate sentence segmenter, it is dramatically faster than other methods.

Another difference is that the quickmt models were designed to be quick, with a small number of decoder layers, whereas most of the libretranslate models are re-packaged Opus-MT models with 6 decoder layers.

How does quickmt compare to libretranslate in terms of translation quality?

Quality Comparison

chrf2 was calculated with sacrebleu, and comet was calculated with the comet library using its default model (comet22).

First let’s take a look at translation from the languages supported by quickmt into English:

Code
px.bar(
    df[df.src_lang != "en"],
    x="src_lang",
    y="comet22",
    color="system",
    barmode="group",
    title="XX->EN Machine Translation Quality - comet22 measure",
    range_y=[70, 95],
    height=500
)
Code
px.bar(
    df[df.src_lang != "en"],
    x="src_lang",
    y="chrf2",
    color="system",
    barmode="group",
    title="XX->EN Machine Translation Quality - chrf measure",
    height=500
)

For both chrf2 and comet22, and for all languages with the notable exception of Russian (85.96 comet22 for quickmt vs. 86.44 for libretranslate), quickmt is higher quality than libretranslate. The differences are huge for Korean (86.01 vs. 72.46 comet22), Turkish (88.99 vs. 78.82 comet22), Japanese (87.24 vs. 76.04 comet22), Thai (87.15 vs. 77.89 comet22) and Bengali (86.99 vs. 76.92 comet22).

On average, quickmt is 4.95 higher in comet22 and 7.14 higher in chrf2 than libretranslate for translation into English:

Code
df[df.tgt_lang=="en"][["system", "comet22"]].groupby("system").mean()
                  comet22
system
libretranslate  82.599600
quickmt         87.549615
Code
df[df.tgt_lang=="en"][["system", "chrf2"]].groupby("system").mean()
                    chrf2
system
libretranslate  56.313600
quickmt         63.457308
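The two deltas quoted above (4.95 comet22, 7.14 chrf2) follow directly from these per-system means (copied from the tables, not recomputed from the raw scores):

```python
import pandas as pd

# Mean XX->EN scores per system, copied from the tables above.
avgs = pd.DataFrame(
    {"comet22": [82.599600, 87.549615], "chrf2": [56.313600, 63.457308]},
    index=["libretranslate", "quickmt"],
)

delta = (avgs.loc["quickmt"] - avgs.loc["libretranslate"]).round(2)
print(delta["comet22"], delta["chrf2"])  # 4.95 7.14
```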

Next we look at translation from English into the languages supported by quickmt:

Code
px.bar(
    df[df.src_lang == "en"],
    x="tgt_lang",
    y="comet22",
    color="system",
    barmode="group",
    title="EN->XX Machine Translation Quality - comet22 measure",
    range_y=[70, 95],
    height=500
)
Code
px.bar(
    df[df.src_lang == "en"],
    x="tgt_lang",
    y="chrf2",
    color="system",
    barmode="group",
    title="EN->XX Machine Translation Quality - chrf2 measure",
    height=500
)

The pattern here is similar: quickmt is higher quality than libretranslate for all languages (except Russian!), sometimes dramatically, and by an average of 4.97 comet22 and 6.38 chrf2.

Code
df[df.tgt_lang!="en"][["system", "comet22"]].groupby("system").mean()
                  comet22
system
libretranslate  81.841538
quickmt         86.808846
Code
df[df.tgt_lang!="en"][["system", "chrf2"]].groupby("system").mean()
                    chrf2
system
libretranslate  51.745000
quickmt         58.123077

Conclusion

quickmt is more than 40 times faster than libretranslate (on an RTX 4070s GPU) and quite a bit higher quality for all languages except Russian.

You can give quickmt a try on Huggingface Spaces, download the models from Huggingface and make it your own by forking our Github repository. Thanks for reading!