Building accurate translation-tailored large language models with language-aware instruction tuning

Changtong ZAN, Liang DING, Li SHEN, Yibing ZHAN, Xinghao YANG, Weifeng LIU

Front. Inform. Technol. Electron. Eng., 2025, Vol. 26, Issue 8: 1341-1355. DOI: 10.1631/FITEE.2400458
Research Article


Abstract

Large language models (LLMs) exhibit remarkable capabilities in various natural language processing tasks, such as machine translation. However, the large parameter count of LLMs incurs significant costs during inference. Previous studies have attempted to build translation-tailored LLMs from moderately sized models by fine-tuning them on translation data. Nevertheless, when translating in zero-shot directions absent from the fine-tuning data, these models still ignore the instructions and produce output in the wrong language (i.e., the off-target translation issue). In this work, we design a two-stage fine-tuning algorithm to improve the instruction-following ability of translation-tailored LLMs, particularly for maintaining accurate translation directions. We first fine-tune LLMs on translation data to elicit basic translation capabilities. In the second stage, we construct instruction-conflicting samples by randomly replacing the instructions with incorrect ones. We then introduce an additional unlikelihood loss to reduce the probability assigned to those samples. Experiments on two benchmarks using the LLaMA 2 and LLaMA 3 models, spanning 16 zero-shot directions, demonstrate that, compared to the competitive baseline of translation-fine-tuned LLaMA, our method effectively reduces the off-target translation ratio (by up to 62.4 percentage points), thereby improving translation quality (by up to 9.7 bilingual evaluation understudy (BLEU) points). Analysis shows that our method preserves the model's performance on other tasks, such as supervised translation and general tasks. Code is released at https://github.com/alphadl/LanguageAware_Tuning.
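For concreteness, the following is a minimal PyTorch-style sketch of the second-stage objective described above: instruction-conflicting samples are built by swapping in a wrong target language, and an unlikelihood term penalizes the probability the model assigns to the reference translation under the conflicting instruction. The function names, the instruction template, the `alpha` weight, and the Hugging Face-style causal-LM interface are illustrative assumptions, not the authors' released implementation (see the linked repository for that).

```python
import random

import torch
import torch.nn.functional as F


# Hypothetical instruction template; the paper's exact prompts may differ.
def make_instruction(src_lang: str, tgt_lang: str) -> str:
    return f"Translate the following sentence from {src_lang} to {tgt_lang}."


def make_conflicting_instruction(src_lang: str, tgt_lang: str, languages: list) -> str:
    # Randomly replace the target language with a wrong one, so the
    # instruction conflicts with the reference translation.
    wrong_tgt = random.choice([l for l in languages if l not in (src_lang, tgt_lang)])
    return make_instruction(src_lang, wrong_tgt)


def unlikelihood_loss(logits: torch.Tensor, labels: torch.Tensor, pad_id: int = -100) -> torch.Tensor:
    # Penalize the probability assigned to the reference tokens under the
    # conflicting instruction:  L_UL = -sum_t log(1 - p(y_t | y_<t, x, wrong instruction)).
    probs = F.softmax(logits, dim=-1)                  # (batch, seq, vocab)
    mask = labels.ne(pad_id)                           # ignore padding positions
    safe_labels = labels.clamp(min=0)                  # keep gather indices valid
    p_ref = probs.gather(-1, safe_labels.unsqueeze(-1)).squeeze(-1)
    ul = -torch.log((1.0 - p_ref).clamp(min=1e-6))     # clamp for numerical stability
    return (ul * mask).sum() / mask.sum()


def stage2_loss(model, correct_batch: dict, conflicting_batch: dict, alpha: float = 1.0) -> torch.Tensor:
    # Standard cross-entropy on correctly instructed samples (assuming a
    # Hugging Face-style model that returns .loss when labels are given).
    nll = model(**correct_batch).loss
    out = model(input_ids=conflicting_batch["input_ids"],
                attention_mask=conflicting_batch["attention_mask"])
    # Shift so that position t predicts token t+1, as in causal LM training.
    logits = out.logits[:, :-1, :]
    labels = conflicting_batch["labels"][:, 1:]
    # alpha is a hypothetical weighting coefficient; the paper's may differ.
    return nll + alpha * unlikelihood_loss(logits, labels)
```

In this sketch, the first stage is ordinary supervised fine-tuning on translation pairs; `stage2_loss` then mixes the usual likelihood term with the unlikelihood term so that the model learns to follow the language named in the instruction rather than defaulting to directions seen during fine-tuning.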

Keywords

Zero-shot machine translation / Off-target issue / Large language model / Language-aware instruction tuning / Instruction-conflicting sample

Cite this article

Changtong ZAN, Liang DING, Li SHEN, Yibing ZHAN, Xinghao YANG, Weifeng LIU. Building accurate translation-tailored large language models with language-aware instruction tuning. Front. Inform. Technol. Electron. Eng., 2025, 26(8): 1341-1355. DOI: 10.1631/FITEE.2400458



RIGHTS & PERMISSIONS

Zhejiang University Press


Supplementary files

FITEE-1341-25006-CTZ_suppl_1

FITEE-1341-25006-CTZ_suppl_2
