
CLC number: TP391

On-line Access: 2025-06-04

Received: 2024-03-30

Revision Accepted: 2024-11-27

Crosschecked: 2025-09-04


 ORCID:

Changtong ZAN

https://orcid.org/0000-0002-5467-0937

Liang DING

https://orcid.org/0000-0001-8976-2084

Weifeng LIU

https://orcid.org/0000-0002-5388-9080


Frontiers of Information Technology & Electronic Engineering 

Accepted manuscript available online (unedited version)


Building accurate translation-tailored large language models with language-aware instruction tuning


Author(s):  Changtong ZAN, Liang DING, Li SHEN, Yibing ZHAN, Xinghao YANG, Weifeng LIU

Affiliation(s):  College of Control Science and Engineering, China University of Petroleum (East China), Qingdao 266580, China; more

Corresponding email(s):  liangding.liam@gmail.com, liuwf@upc.edu.cn

Key Words:  Zero-shot machine translation; Off-target issue; Large language model; Language-aware instruction tuning; Instruction-conflicting sample



Changtong ZAN, Liang DING, Li SHEN, Yibing ZHAN, Xinghao YANG, Weifeng LIU. Building accurate translation-tailored large language models with language-aware instruction tuning[J]. Frontiers of Information Technology & Electronic Engineering, in press. https://doi.org/10.1631/FITEE.2400458

@article{title="Building accurate translation-tailored large language models with language-aware instruction tuning",
author="Changtong ZAN, Liang DING, Li SHEN, Yibing ZHAN, Xinghao YANG, Weifeng LIU",
journal="Frontiers of Information Technology & Electronic Engineering",
year="in press",
publisher="Zhejiang University Press & Springer",
doi="https://doi.org/10.1631/FITEE.2400458"
}

%0 Journal Article
%T Building accurate translation-tailored large language models with language-aware instruction tuning
%A Changtong ZAN
%A Liang DING
%A Li SHEN
%A Yibing ZHAN
%A Xinghao YANG
%A Weifeng LIU
%J Frontiers of Information Technology & Electronic Engineering
%P 1341-1355
%@ 2095-9184
%D in press
%I Zhejiang University Press & Springer
%R https://doi.org/10.1631/FITEE.2400458

TY - JOUR
T1 - Building accurate translation-tailored large language models with language-aware instruction tuning
A1 - Changtong ZAN
A1 - Liang DING
A1 - Li SHEN
A1 - Yibing ZHAN
A1 - Xinghao YANG
A1 - Weifeng LIU
JO - Frontiers of Information Technology & Electronic Engineering
SP - 1341
EP - 1355
SN - 2095-9184
Y1 - in press
PB - Zhejiang University Press & Springer
DO - https://doi.org/10.1631/FITEE.2400458
ER -


Abstract: 
Large language models (LLMs) exhibit remarkable capabilities in various natural language processing tasks, such as machine translation. However, the large number of LLM parameters incurs significant costs during inference. Previous studies have therefore attempted to build translation-tailored LLMs from moderately sized models by fine-tuning them on translation data. Nevertheless, when translating in zero-shot directions that are absent from the fine-tuning data, these models still tend to ignore the instruction and produce output in the wrong language, i.e., the off-target translation issue. In this work, we design a two-stage fine-tuning algorithm to improve the instruction-following ability of translation-tailored LLMs, particularly in maintaining the correct translation direction. In the first stage, we fine-tune LLMs on translation data to elicit basic translation capabilities. In the second stage, we construct instruction-conflicting samples by randomly replacing the translation instructions with incorrect ones, and introduce an extra unlikelihood loss to reduce the probability assigned to those samples. Experiments on two benchmarks with the LLaMA 2 and LLaMA 3 models, spanning 16 zero-shot directions, demonstrate that, compared with the competitive translation-finetuned LLaMA baseline, our method effectively reduces the off-target translation ratio (by up to 62.4 percentage points) and thus improves translation quality (by up to +9.7 BLEU, bilingual evaluation understudy). Analysis shows that our method preserves the model's performance on other tasks, such as supervised translation and general tasks. Code is released at https://github.com/alphadl/LanguageAware_Tuning.
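To make the two-stage recipe in the abstract concrete, the sketch below (PyTorch-style Python) shows one plausible form of the second-stage objective: a standard cross-entropy term on correctly instructed samples plus an unlikelihood term that lowers the probability of a reference translation when it is paired with a deliberately wrong instruction. This is a minimal illustration under assumed details; the prompt template, language list, helper names (build_conflicting_prompt, second_stage_loss), and the weighting factor alpha are not taken from the paper or its released code, which can be found in the linked repository.

```python
import random
import torch
import torch.nn.functional as F

LANGS = ["English", "German", "French", "Chinese"]  # assumed language set for illustration

def build_conflicting_prompt(src_text, src_lang, tgt_lang):
    """Build an instruction-conflicting sample: keep the source sentence and the
    original reference, but name a wrong target language in the instruction so the
    stated direction no longer matches the reference (assumed prompt template)."""
    wrong_tgt = random.choice([l for l in LANGS if l != tgt_lang])
    return f"Translate the following text from {src_lang} to {wrong_tgt}.\n{src_text}"

def second_stage_loss(logits_ok, labels_ok, logits_bad, labels_bad,
                      alpha=1.0, ignore_index=-100):
    """Cross-entropy on correctly instructed samples plus an unlikelihood term on
    instruction-conflicting samples. Shapes: logits (B, T, V), labels (B, T)."""
    vocab = logits_ok.size(-1)

    # Likelihood term: maximize p(reference | source, correct instruction).
    ce = F.cross_entropy(logits_ok.reshape(-1, vocab), labels_ok.reshape(-1),
                         ignore_index=ignore_index)

    # Unlikelihood term: push down p(reference | source, conflicting instruction)
    # by maximizing log(1 - p) over the reference tokens of the conflicting sample.
    log_p = F.log_softmax(logits_bad, dim=-1)
    mask = labels_bad.ne(ignore_index).float()
    token_log_p = log_p.gather(-1, labels_bad.clamp(min=0).unsqueeze(-1)).squeeze(-1)
    ul = -((1.0 - token_log_p.exp()).clamp(min=1e-6).log() * mask).sum() / mask.sum().clamp(min=1.0)

    return ce + alpha * ul
```

The appeal of an unlikelihood-style penalty here is that it needs no extra references: the same translation pairs used in stage one can be reused, with only the instruction perturbed, to explicitly discourage outputs that ignore the requested target language.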



