Datasets:

Schema (column · dtype · observed range):

    bibtex_url                  string    lengths 41–53
    proceedings                 string    lengths 38–50
    bibtext                     string    lengths 528–3.02k
    abstract                    string    lengths 17–2.35k
    authors                     sequence  lengths 1–44
    title                       string    lengths 18–190
    id                          string    lengths 7–19
    arxiv_id                    string    lengths 0–10
    GitHub                      sequence  lengths 1–1
    paper_page                  string    528 distinct values
    n_linked_authors            int64     -1 to 15
    upvotes                     int64     -1 to 77
    num_comments                int64     -1 to 10
    n_authors                   int64     -1 to 52
    Models                      sequence  lengths 0–100
    Datasets                    sequence  lengths 0–15
    Spaces                      sequence  lengths 0–46
    paper_page_exists_pre_conf  int64     0 to 1
    type                        string    2 distinct values
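The records below are rows of this dataset, flattened one field per line. As a minimal sketch of working with them programmatically (assuming the dump corresponds to a Hugging Face dataset; "user/acl-2023-papers" is a hypothetical repo id, not the real path), the data could be loaded and inspected with the `datasets` library:

    # Minimal sketch; the repo id is a placeholder, not the dataset's real path.
    from datasets import load_dataset

    ds = load_dataset("user/acl-2023-papers", split="train")

    print(ds.features)      # should mirror the schema listed above

    row = ds[0]
    print(row["title"])
    print(row["arxiv_id"])  # "" when no arXiv preprint is linked
    print(row["authors"])   # list of "Last, First" strings
    print(row["upvotes"])   # -1 when the paper has no paper page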
https://aclanthology.org/2023.pandl-1.5.bib
https://aclanthology.org/2023.pandl-1.5/
@inproceedings{chiu-etal-2023-symbolic, title = "Symbolic Planning and Code Generation for Grounded Dialogue", author = "Chiu, Justin and Zhao, Wenting and Chen, Derek and Vaduguru, Saujas and Rush, Alexander and Fried, Daniel", editor = "Surdeanu, Mihai and Riloff, Ellen and Chiticariu, Laura and Frietag, Dayne and Hahn-Powell, Gus and Morrison, Clayton T. and Noriega-Atala, Enrique and Sharp, Rebecca and Valenzuela-Escarcega, Marco", booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.pandl-1.5", doi = "10.18653/v1/2023.pandl-1.5", pages = "43--53", abstract = "Large language models (LLMs) excel at processing and generating both text and code. However, LLMs have had limited applicability in grounded task-oriented dialogue as they are difficult to steer toward task objectives and fail to handle novel grounding. We present a modular and interpretable grounded dialogue system that addresses these shortcomings by composing LLMs with a symbolic planner and grounded code execution. Our system consists of a reader and planner: the reader leverages an LLM to convert partner utterances into executable code, calling functions that perform grounding. The translated code{'}s output is stored to track dialogue state, while a symbolic planner determines the next appropriate response. We evaluate our system{'}s performance on the demanding OneCommon dialogue task, involving collaborative reference resolution on abstract images of scattered dots. Our system substantially outperforms the previous state-of-the-art, including improving task success in human evaluations from 56{\%} to 69{\%} in the most challenging setting.", }
Large language models (LLMs) excel at processing and generating both text and code. However, LLMs have had limited applicability in grounded task-oriented dialogue as they are difficult to steer toward task objectives and fail to handle novel grounding. We present a modular and interpretable grounded dialogue system that addresses these shortcomings by composing LLMs with a symbolic planner and grounded code execution. Our system consists of a reader and planner: the reader leverages an LLM to convert partner utterances into executable code, calling functions that perform grounding. The translated code's output is stored to track dialogue state, while a symbolic planner determines the next appropriate response. We evaluate our system's performance on the demanding OneCommon dialogue task, involving collaborative reference resolution on abstract images of scattered dots. Our system substantially outperforms the previous state-of-the-art, including improving task success in human evaluations from 56% to 69% in the most challenging setting.
[ "Chiu, Justin", "Zhao, Wenting", "Chen, Derek", "Vaduguru, Saujas", "Rush, Alex", "er", "Fried, Daniel" ]
Symbolic Planning and Code Generation for Grounded Dialogue
pandl-1.5
2310.17140
[ "https://github.com/justinchiu/onecommon-gpt" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
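For orientation, here is the first record above reassembled as a single row; the values are copied verbatim from the dump, with the long bibtext, abstract, authors, and paper_page text fields omitted for brevity:

    # The first record as one row (long text fields omitted for brevity).
    row = {
        "bibtex_url": "https://aclanthology.org/2023.pandl-1.5.bib",
        "proceedings": "https://aclanthology.org/2023.pandl-1.5/",
        "title": "Symbolic Planning and Code Generation for Grounded Dialogue",
        "id": "pandl-1.5",
        "arxiv_id": "2310.17140",
        "GitHub": ["https://github.com/justinchiu/onecommon-gpt"],
        "n_linked_authors": -1,  # -1 sentinel: no linked Hugging Face paper page
        "upvotes": -1,
        "num_comments": -1,
        "n_authors": -1,
        "Models": [],
        "Datasets": [],
        "Spaces": [],
        "paper_page_exists_pre_conf": 0,
        "type": "Poster",
    }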
https://aclanthology.org/2023.pandl-1.6.bib
https://aclanthology.org/2023.pandl-1.6/
@inproceedings{neves-ribeiro-etal-2023-towards, title = "Towards Zero-Shot Frame Semantic Parsing with Task Agnostic Ontologies and Simple Labels", author = "Neves Ribeiro, Danilo and Goetz, Jack and Abdar, Omid and Ross, Mike and Dong, Annie and Forbus, Kenneth and Mohamed, Ahmed", editor = "Surdeanu, Mihai and Riloff, Ellen and Chiticariu, Laura and Frietag, Dayne and Hahn-Powell, Gus and Morrison, Clayton T. and Noriega-Atala, Enrique and Sharp, Rebecca and Valenzuela-Escarcega, Marco", booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.pandl-1.6", doi = "10.18653/v1/2023.pandl-1.6", pages = "54--63", abstract = "Frame semantic parsing is an important component of task-oriented dialogue systems. Current models rely on a significant amount training data to successfully identify the intent and slots in the user{'}s input utterance. This creates a significant barrier for adding new domains to virtual assistant capabilities, as creation of this data requires highly specialized NLP expertise. In this work we propose OpenFSP, a framework that allows for easy creation of new domains from a handful of simple labels that can be generated without specific NLP knowledge. Our approach relies on creating a small, but expressive, set of domain agnostic slot types that enables easy annotation of new domains. Given such annotation, a matching algorithm relying on sentence encoders predicts the intent and slots for domains defined by end-users. Experiments on the TopV2 dataset shows that our model trained on these simple labels have strong performance against supervised baselines.", }
Frame semantic parsing is an important component of task-oriented dialogue systems. Current models rely on a significant amount of training data to successfully identify the intent and slots in the user's input utterance. This creates a significant barrier for adding new domains to virtual assistant capabilities, as creation of this data requires highly specialized NLP expertise. In this work we propose OpenFSP, a framework that allows for easy creation of new domains from a handful of simple labels that can be generated without specific NLP knowledge. Our approach relies on creating a small, but expressive, set of domain-agnostic slot types that enables easy annotation of new domains. Given such annotation, a matching algorithm relying on sentence encoders predicts the intent and slots for domains defined by end-users. Experiments on the TopV2 dataset show that our model trained on these simple labels has strong performance against supervised baselines.
[ "Neves Ribeiro, Danilo", "Goetz, Jack", "Abdar, Omid", "Ross, Mike", "Dong, Annie", "Forbus, Kenneth", "Mohamed, Ahmed" ]
Towards Zero-Shot Frame Semantic Parsing with Task Agnostic Ontologies and Simple Labels
pandl-1.6
2305.03793
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.pandl-1.7.bib
https://aclanthology.org/2023.pandl-1.7/
@inproceedings{gu-etal-2023-co, title = "Co-evolving data-driven and {NLU}-driven Synthesizers for Generating Code in Domain Growth and Data Scarcity", author = "Gu, Jiasheng and Nan, Zifan and Peng, Zhiyuan and Shen, Xipeng and Xu, Dongkuan", editor = "Surdeanu, Mihai and Riloff, Ellen and Chiticariu, Laura and Frietag, Dayne and Hahn-Powell, Gus and Morrison, Clayton T. and Noriega-Atala, Enrique and Sharp, Rebecca and Valenzuela-Escarcega, Marco", booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.pandl-1.7", doi = "10.18653/v1/2023.pandl-1.7", pages = "64--74", abstract = "Natural language programming automatically generates code based on a user{'}s text query. Recent solutions are either data-driven or natural language understanding (NLU)-driven. However, the data-driven synthesizer requires a large number of query-code pairs for training, which hinders its application to low-resource programming languages with growing domains whose functionality and grammar can be actively updated. NLU-driven synthesizers solve this problem, but their code generation is slow and their performance rapidly saturates in the presence of ever-increasing data. In this paper, we propose a circular training framework, Colead, which co-evolves both the data-driven synthesizer and the NLU-driven synthesizer to achieve high-quality code generation in the presence of data scarcity and domain growth. The NLU-driven synthesizer generates query-code pairs to update the data-driven synthesizer, which shares a part of its updated model to improve the NLU-driven synthesizers, enabling the co-evolution of both. Experiments show that Colead gives better results than the baselines in the presence of domain growth and data scarcity, and Colead consistently improves the performance of both data-driven and NLU-driven synthesizers over the co-evolvement.", }
Natural language programming automatically generates code based on a user's text query. Recent solutions are either data-driven or natural language understanding (NLU)-driven. However, the data-driven synthesizer requires a large number of query-code pairs for training, which hinders its application to low-resource programming languages with growing domains whose functionality and grammar can be actively updated. NLU-driven synthesizers solve this problem, but their code generation is slow and their performance rapidly saturates in the presence of ever-increasing data. In this paper, we propose a circular training framework, Colead, which co-evolves both the data-driven synthesizer and the NLU-driven synthesizer to achieve high-quality code generation in the presence of data scarcity and domain growth. The NLU-driven synthesizer generates query-code pairs to update the data-driven synthesizer, which shares a part of its updated model to improve the NLU-driven synthesizer, enabling the co-evolution of both. Experiments show that Colead gives better results than the baselines in the presence of domain growth and data scarcity, and Colead consistently improves the performance of both data-driven and NLU-driven synthesizers throughout the co-evolution.
[ "Gu, Jiasheng", "Nan, Zifan", "Peng, Zhiyuan", "Shen, Xipeng", "Xu, Dongkuan" ]
Co-evolving data-driven and NLU-driven Synthesizers for Generating Code in Domain Growth and Data Scarcity
pandl-1.7
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.pandl-1.8.bib
https://aclanthology.org/2023.pandl-1.8/
@inproceedings{cheng-etal-2023-complementary, title = "Complementary Roles of Inference and Language Models in {QA}", author = "Cheng, Liang and Hosseini, Mohammad Javad and Steedman, Mark", editor = "Surdeanu, Mihai and Riloff, Ellen and Chiticariu, Laura and Frietag, Dayne and Hahn-Powell, Gus and Morrison, Clayton T. and Noriega-Atala, Enrique and Sharp, Rebecca and Valenzuela-Escarcega, Marco", booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.pandl-1.8", doi = "10.18653/v1/2023.pandl-1.8", pages = "75--91", abstract = "Answering open-domain questions through unsupervised methods poses challenges for both machine-reading (MR) and language model (LM) -based approaches. The MR-based approach suffers from sparsity issues in extracted knowledge graphs (KGs), while the performance of the LM-based approach significantly depends on the quality of the retrieved context for questions. In this paper, we compare these approaches and propose a novel methodology that leverages directional predicate entailment (inference) to address these limitations. We use entailment graphs (EGs), with natural language predicates as nodes and entailment as edges, to enhance parsed KGs by inferring unseen assertions, effectively mitigating the sparsity problem in the MR-based approach. We also show EGs improve context retrieval for the LM-based approach. Additionally, we present a Boolean QA task, demonstrating that EGs exhibit comparable directional inference capabilities to large language models (LLMs). Our results highlight the importance of inference in open-domain QA and the improvements brought by leveraging EGs.", }
Answering open-domain questions through unsupervised methods poses challenges for both machine-reading (MR) and language model (LM)-based approaches. The MR-based approach suffers from sparsity issues in extracted knowledge graphs (KGs), while the performance of the LM-based approach significantly depends on the quality of the retrieved context for questions. In this paper, we compare these approaches and propose a novel methodology that leverages directional predicate entailment (inference) to address these limitations. We use entailment graphs (EGs), with natural language predicates as nodes and entailment as edges, to enhance parsed KGs by inferring unseen assertions, effectively mitigating the sparsity problem in the MR-based approach. We also show EGs improve context retrieval for the LM-based approach. Additionally, we present a Boolean QA task, demonstrating that EGs exhibit comparable directional inference capabilities to large language models (LLMs). Our results highlight the importance of inference in open-domain QA and the improvements brought by leveraging EGs.
[ "Cheng, Liang", "Hosseini, Mohammad Javad", "Steedman, Mark" ]
Complementary Roles of Inference and Language Models in QA
pandl-1.8
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.pandl-1.9.bib
https://aclanthology.org/2023.pandl-1.9/
@inproceedings{steindl-etal-2023-controlled, title = "Controlled Data Augmentation for Training Task-Oriented Dialog Systems with Low Resource Data", author = {Steindl, Sebastian and Sch{\"a}fer, Ulrich and Ludwig, Bernd}, editor = "Surdeanu, Mihai and Riloff, Ellen and Chiticariu, Laura and Frietag, Dayne and Hahn-Powell, Gus and Morrison, Clayton T. and Noriega-Atala, Enrique and Sharp, Rebecca and Valenzuela-Escarcega, Marco", booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.pandl-1.9", doi = "10.18653/v1/2023.pandl-1.9", pages = "92--102", abstract = "Modern dialog systems rely on Deep Learning to train transformer-based model architectures. These notoriously rely on large amounts of training data. However, the collection of conversational data is often a tedious and costly process. This is especially true for Task-Oriented Dialogs, where the system ought to help the user achieve specific tasks, such as making reservations. We investigate a controlled strategy for dialog synthesis. Our method generates utterances based on dialog annotations in a sequence-to-sequence manner. Besides exploring the viability of the approach itself, we also explore the effect of constrained beam search on the generation capabilities. Moreover, we analyze the effectiveness of the proposed method as a data augmentation by studying the impact the synthetic dialogs have on training dialog systems. We perform the experiments in multiple settings, simulating various amounts of ground-truth data. Our work shows that a controlled generation approach is a viable method to synthesize Task-Oriented Dialogs, that can in turn be used to train dialog systems. We were able to improve this process by utilizing constrained beam search.", }
Modern dialog systems rely on Deep Learning to train transformer-based model architectures. These notoriously rely on large amounts of training data. However, the collection of conversational data is often a tedious and costly process. This is especially true for Task-Oriented Dialogs, where the system ought to help the user achieve specific tasks, such as making reservations. We investigate a controlled strategy for dialog synthesis. Our method generates utterances based on dialog annotations in a sequence-to-sequence manner. Besides exploring the viability of the approach itself, we also explore the effect of constrained beam search on the generation capabilities. Moreover, we analyze the effectiveness of the proposed method as a data augmentation technique by studying the impact the synthetic dialogs have on training dialog systems. We perform the experiments in multiple settings, simulating various amounts of ground-truth data. Our work shows that a controlled generation approach is a viable method to synthesize Task-Oriented Dialogs, which can in turn be used to train dialog systems. We were able to improve this process by utilizing constrained beam search.
[ "Steindl, Sebastian", "Sch{\\\"a}fer, Ulrich", "Ludwig, Bernd" ]
Controlled Data Augmentation for Training Task-Oriented Dialog Systems with Low Resource Data
pandl-1.9
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.pandl-1.10.bib
https://aclanthology.org/2023.pandl-1.10/
@inproceedings{gabud-etal-2023-hybrid, title = "A Hybrid of Rule-based and Transformer-based Approaches for Relation Extraction in Biodiversity Literature", author = "Gabud, Roselyn and Lapitan, Portia and Mariano, Vladimir and Mendoza, Eduardo and Pampolina, Nelson and Clari{\~n}o, Maria Art Antonette and Batista-Navarro, Riza", editor = "Surdeanu, Mihai and Riloff, Ellen and Chiticariu, Laura and Frietag, Dayne and Hahn-Powell, Gus and Morrison, Clayton T. and Noriega-Atala, Enrique and Sharp, Rebecca and Valenzuela-Escarcega, Marco", booktitle = "Proceedings of the 2nd Workshop on Pattern-based Approaches to NLP in the Age of Deep Learning", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.pandl-1.10", doi = "10.18653/v1/2023.pandl-1.10", pages = "103--113", abstract = "Relation extraction (RE) is one of the tasks behind many relevant natural language processing (NLP) applications. Exploiting the information hidden in millions of scholarly articles by leveraging NLP, specifically RE, systems could benefit studies in specialized domains, e.g. biomedicine and biodiversity. Although deep learning (DL)-based methods have shown state-of-the-art performance in many NLP tasks including RE, DL for domain-specific RE systems has been hindered by the lack of expert-labeled datasets which are typically required to train such methods. In this paper, we take advantage of the zero-shot (i.e., not requiring any labeled data) capability of pattern-based methods for RE using a rule-based approach, combined with templates for natural language inference (NLI) transformer models. We present our hybrid method for RE that exploits the advantages of both methods, i.e., interpretability of rules and transferability of transformers. Evaluated on a corpus of biodiversity literature with annotated relations, our hybrid method demonstrated an improvement of up to 15 percentage points in recall and best performance over solely rule-based and transformer-based methods with F1-scores ranging from 89.61{\%} to 96.75{\%} for reproductive condition - temporal expression relations, and ranging from 85.39{\%} to 89.90{\%} for habitat - geographic location relations.", }
Relation extraction (RE) is one of the tasks behind many relevant natural language processing (NLP) applications. Exploiting the information hidden in millions of scholarly articles by leveraging NLP, specifically RE, systems could benefit studies in specialized domains, e.g. biomedicine and biodiversity. Although deep learning (DL)-based methods have shown state-of-the-art performance in many NLP tasks including RE, DL for domain-specific RE systems has been hindered by the lack of expert-labeled datasets which are typically required to train such methods. In this paper, we take advantage of the zero-shot (i.e., not requiring any labeled data) capability of pattern-based methods for RE using a rule-based approach, combined with templates for natural language inference (NLI) transformer models. We present our hybrid method for RE that exploits the advantages of both methods, i.e., interpretability of rules and transferability of transformers. Evaluated on a corpus of biodiversity literature with annotated relations, our hybrid method demonstrated an improvement of up to 15 percentage points in recall and the best performance over purely rule-based and transformer-based methods, with F1-scores ranging from 89.61% to 96.75% for reproductive condition–temporal expression relations, and from 85.39% to 89.90% for habitat–geographic location relations.
[ "Gabud, Roselyn", "Lapitan, Portia", "Mariano, Vladimir", "Mendoza, Eduardo", "Pampolina, Nelson", "Clari{\\~n}o, Maria Art Antonette", "Batista-Navarro, Riza" ]
A Hybrid of Rule-based and Transformer-based Approaches for Relation Extraction in Biodiversity Literature
pandl-1.10
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.splurobonlp-1.1.bib
https://aclanthology.org/2023.splurobonlp-1.1/
@inproceedings{miceli-barone-etal-2023-dialogue, title = "Dialogue-based generation of self-driving simulation scenarios using Large Language Models", author = "Miceli Barone, Antonio Valerio and Innes, Craig and Lascarides, Alex", editor = "Padmakumar, Aishwarya and Inan, Mert and Fan, Yue and Wang, Xin and Alikhani, Malihe", booktitle = "Proceedings of the 3rd Combined Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2023)", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.splurobonlp-1.1", doi = "10.18653/v1/2023.splurobonlp-1.1", pages = "1--12", abstract = "Simulation is an invaluable tool for developing and evaluating controllers for self-driving cars. Current simulation frameworks are driven by highly-specialist domain specific languages, and so a natural language interface would greatly enhance usability. But there is often a gap, consisting of tacit assumptions the user is making, between a concise English utterance and the executable code that captures the user{'}s intent. In this paper we describe a system that addresses this issue by supporting an extended multimodal interaction: the user can follow up prior instructions with refinements or revisions, in reaction to the simulations that have been generated from their utterances so far. We use Large Language Models (LLMs) to map the user{'}s English utterances in this interaction into domain-specific code, and so we explore the extent to which LLMs capture the context sensitivity that{'}s necessary for computing the speaker{'}s intended message in discourse.", }
Simulation is an invaluable tool for developing and evaluating controllers for self-driving cars. Current simulation frameworks are driven by highly specialized domain-specific languages, and so a natural language interface would greatly enhance usability. But there is often a gap, consisting of tacit assumptions the user is making, between a concise English utterance and the executable code that captures the user's intent. In this paper we describe a system that addresses this issue by supporting an extended multimodal interaction: the user can follow up prior instructions with refinements or revisions, in reaction to the simulations that have been generated from their utterances so far. We use Large Language Models (LLMs) to map the user's English utterances in this interaction into domain-specific code, and so we explore the extent to which LLMs capture the context sensitivity that's necessary for computing the speaker's intended message in discourse.
[ "Miceli Barone, Antonio Valerio", "Innes, Craig", "Lascarides, Alex" ]
Dialogue-based generation of self-driving simulation scenarios using Large Language Models
splurobonlp-1.1
2310.17372
[ "https://github.com/avmb/dialogllmscenic" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.1.bib
https://aclanthology.org/2023.wmt-1.1/
@inproceedings{kocmi-etal-2023-findings, title = "Findings of the 2023 Conference on Machine Translation ({WMT}23): {LLM}s Are Here but Not Quite There Yet", author = "Kocmi, Tom and Avramidis, Eleftherios and Bawden, Rachel and Bojar, Ond{\v{r}}ej and Dvorkovich, Anton and Federmann, Christian and Fishel, Mark and Freitag, Markus and Gowda, Thamme and Grundkiewicz, Roman and Haddow, Barry and Koehn, Philipp and Marie, Benjamin and Monz, Christof and Morishita, Makoto and Murray, Kenton and Nagata, Makoto and Nakazawa, Toshiaki and Popel, Martin and Popovi{\'c}, Maja and Shmatova, Mariya", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.1", doi = "10.18653/v1/2023.wmt-1.1", pages = "1--42", abstract = "This paper presents the results of the General Machine Translation Task organised as part of the 2023 Conference on Machine Translation (WMT). In the general MT task, participants were asked to build machine translation systems for any of 8 language pairs (corresponding to 14 translation directions), to be evaluated on test sets consisting of up to four different domains. We evaluate system outputs with professional human annotators using a combination of source-based Direct Assessment and scalar quality metric (DA+SQM).", }
This paper presents the results of the General Machine Translation Task organised as part of the 2023 Conference on Machine Translation (WMT). In the general MT task, participants were asked to build machine translation systems for any of 8 language pairs (corresponding to 14 translation directions), to be evaluated on test sets consisting of up to four different domains. We evaluate system outputs with professional human annotators using a combination of source-based Direct Assessment and scalar quality metric (DA+SQM).
[ "Kocmi, Tom", "Avramidis, Eleftherios", "Bawden, Rachel", "Bojar, Ond{\\v{r}}ej", "Dvorkovich, Anton", "Federmann, Christian", "Fishel, Mark", "Freitag, Markus", "Gowda, Thamme", "Grundkiewicz, Roman", "Haddow, Barry", "Koehn, Philipp", "Marie, Benjamin", "Monz, Christof", "Morishita, Makoto", "Murray, Kenton", "Nagata, Makoto", "Nakazawa, Toshiaki", "Popel, Martin", "Popovi{\\'c}, Maja", "Shmatova, Mariya" ]
Findings of the 2023 Conference on Machine Translation (WMT23): LLMs Are Here but Not Quite There Yet
wmt-1.1
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.2.bib
https://aclanthology.org/2023.wmt-1.2/
@inproceedings{neves-etal-2023-findings, title = "Findings of the {WMT} 2023 Biomedical Translation Shared Task: Evaluation of {C}hat{GPT} 3.5 as a Comparison System", author = "Neves, Mariana and Jimeno Yepes, Antonio and N{\'e}v{\'e}ol, Aur{\'e}lie and Bawden, Rachel and Di Nunzio, Giorgio Maria and Roller, Roland and Thomas, Philippe and Vezzani, Federica and Vicente Navarro, Maika and Yeganova, Lana and Wiemann, Dina and Grozea, Cristian", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.2", doi = "10.18653/v1/2023.wmt-1.2", pages = "43--54", abstract = "We present an overview of the Biomedical Translation Task that was part of the Eighth Conference on Machine Translation (WMT23). The aim of the task was the automatic translation of biomedical abstracts from the PubMed database. It included twelve language directions, namely, French, Spanish, Portuguese, Italian, German, and Russian, from and into English. We received submissions from 18 systems and for all the test sets that we released. Our comparison system was based on ChatGPT 3.5 and performed very well in comparison to many of the submissions.", }
We present an overview of the Biomedical Translation Task that was part of the Eighth Conference on Machine Translation (WMT23). The aim of the task was the automatic translation of biomedical abstracts from the PubMed database. It included twelve language directions, namely, French, Spanish, Portuguese, Italian, German, and Russian, from and into English. We received submissions from 18 systems, covering all of the test sets that we released. Our comparison system was based on ChatGPT 3.5 and performed very well in comparison to many of the submissions.
[ "Neves, Mariana", "Jimeno Yepes, Antonio", "N{\\'e}v{\\'e}ol, Aur{\\'e}lie", "Bawden, Rachel", "Di Nunzio, Giorgio Maria", "Roller, Rol", "", "Thomas, Philippe", "Vezzani, Federica", "Vicente Navarro, Maika", "Yeganova, Lana", "Wiemann, Dina", "Grozea, Cristian" ]
Findings of the WMT 2023 Biomedical Translation Shared Task: Evaluation of ChatGPT 3.5 as a Comparison System
wmt-1.2
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.3.bib
https://aclanthology.org/2023.wmt-1.3/
@inproceedings{wang-etal-2023-findings, title = "Findings of the {WMT} 2023 Shared Task on Discourse-Level Literary Translation: A Fresh Orb in the Cosmos of {LLM}s", author = "Wang, Longyue and Tu, Zhaopeng and Gu, Yan and Liu, Siyou and Yu, Dian and Ma, Qingsong and Lyu, Chenyang and Zhou, Liting and Liu, Chao-Hong and Ma, Yufeng and Chen, Weiyu and Graham, Yvette and Webber, Bonnie and Koehn, Philipp and Way, Andy and Yuan, Yulin and Shi, Shuming", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.3", doi = "10.18653/v1/2023.wmt-1.3", pages = "55--67", abstract = "Translating literary works has perennially stood as an elusive dream in machine translation (MT), a journey steeped in intricate challenges. To foster progress in this domain, we hold a new shared task at WMT 2023, the first edition of the Discourse-Level Literary Translation. First, we (Tencent AI Lab and China Literature Ltd.) release a copyrighted and document-level Chinese-English web novel corpus. Furthermore, we put forth an industry-endorsed criteria to guide human evaluation process. This year, we totally received 14 submissions from 7 academia and industry teams. We employ both automatic and human evaluations to measure the performance of the submitted systems. The official ranking of the systems is based on the overall human judgments. In addition, our extensive analysis reveals a series of interesting findings on literary and discourse-aware MT. We release data, system outputs, and leaderboard at http://www2.statmt.org/wmt23/literary-translation-task.html.", }
Translating literary works has perennially stood as an elusive dream in machine translation (MT), a journey steeped in intricate challenges. To foster progress in this domain, we hold a new shared task at WMT 2023, the first edition of the Discourse-Level Literary Translation. First, we (Tencent AI Lab and China Literature Ltd.) release a copyrighted and document-level Chinese-English web novel corpus. Furthermore, we put forth industry-endorsed criteria to guide the human evaluation process. This year, we received a total of 14 submissions from 7 academia and industry teams. We employ both automatic and human evaluations to measure the performance of the submitted systems. The official ranking of the systems is based on the overall human judgments. In addition, our extensive analysis reveals a series of interesting findings on literary and discourse-aware MT. We release data, system outputs, and leaderboard at http://www2.statmt.org/wmt23/literary-translation-task.html.
[ "Wang, Longyue", "Tu, Zhaopeng", "Gu, Yan", "Liu, Siyou", "Yu, Dian", "Ma, Qingsong", "Lyu, Chenyang", "Zhou, Liting", "Liu, Chao-Hong", "Ma, Yufeng", "Chen, Weiyu", "Graham, Yvette", "Webber, Bonnie", "Koehn, Philipp", "Way, Andy", "Yuan, Yulin", "Shi, Shuming" ]
Findings of the WMT 2023 Shared Task on Discourse-Level Literary Translation: A Fresh Orb in the Cosmos of LLMs
wmt-1.3
2311.03127
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.4.bib
https://aclanthology.org/2023.wmt-1.4/
@inproceedings{muller-etal-2023-findings, title = "Findings of the Second {WMT} Shared Task on Sign Language Translation ({WMT}-{SLT}23)", author = {M{\"u}ller, Mathias and Alikhani, Malihe and Avramidis, Eleftherios and Bowden, Richard and Braffort, Annelies and Cihan Camg{\"o}z, Necati and Ebling, Sarah and Espa{\~n}a-Bonet, Cristina and G{\"o}hring, Anne and Grundkiewicz, Roman and Inan, Mert and Jiang, Zifan and Koller, Oscar and Moryossef, Amit and Rios, Annette and Shterionov, Dimitar and Sidler-Miserez, Sandra and Tissi, Katja and Van Landuyt, Davy}, editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.4", doi = "10.18653/v1/2023.wmt-1.4", pages = "68--94", abstract = "This paper presents the results of the Second WMT Shared Task on Sign Language Translation (WMT-SLT23; https://www.wmt-slt.com/). This shared task is concerned with automatic translation between signed and spoken languages. The task is unusual in the sense that it requires processing visual information (such as video frames or human pose estimation) beyond the well-known paradigm of text-to-text machine translation (MT). The task offers four tracks involving the following languages: Swiss German Sign Language (DSGS), French Sign Language of Switzerland (LSF-CH), Italian Sign Language of Switzerland (LIS-CH), German, French and Italian. Four teams (including one working on a baseline submission) participated in this second edition of the task, all submitting to the DSGS-to-German track. Besides a system ranking and system papers describing state-of-the-art techniques, this shared task makes the following scientific contributions: novel corpora and reproducible baseline systems. Finally, the task also resulted in publicly available sets of system outputs and more human evaluation scores for sign language translation.", }
This paper presents the results of the Second WMT Shared Task on Sign Language Translation (WMT-SLT23; https://www.wmt-slt.com/). This shared task is concerned with automatic translation between signed and spoken languages. The task is unusual in the sense that it requires processing visual information (such as video frames or human pose estimation) beyond the well-known paradigm of text-to-text machine translation (MT). The task offers four tracks involving the following languages: Swiss German Sign Language (DSGS), French Sign Language of Switzerland (LSF-CH), Italian Sign Language of Switzerland (LIS-CH), German, French and Italian. Four teams (including one working on a baseline submission) participated in this second edition of the task, all submitting to the DSGS-to-German track. Besides a system ranking and system papers describing state-of-the-art techniques, this shared task makes the following scientific contributions: novel corpora and reproducible baseline systems. Finally, the task also resulted in publicly available sets of system outputs and more human evaluation scores for sign language translation.
[ "M{\\\"u}ller, Mathias", "Alikhani, Malihe", "Avramidis, Eleftherios", "Bowden, Richard", "Braffort, Annelies", "Cihan Camg{\\\"o}z, Necati", "Ebling, Sarah", "Espa{\\~n}a-Bonet, Cristina", "G{\\\"o}hring, Anne", "Grundkiewicz, Roman", "Inan, Mert", "Jiang, Zifan", "Koller, Oscar", "Moryossef, Amit", "Rios, Annette", "Shterionov, Dimitar", "Sidler-Miserez, S", "ra", "Tissi, Katja", "Van L", "uyt, Davy" ]
Findings of the Second WMT Shared Task on Sign Language Translation (WMT-SLT23)
wmt-1.4
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.5.bib
https://aclanthology.org/2023.wmt-1.5/
@inproceedings{sloto-etal-2023-findings, title = "Findings of the {WMT} 2023 Shared Task on Parallel Data Curation", author = "Sloto, Steve and Thompson, Brian and Khayrallah, Huda and Domhan, Tobias and Gowda, Thamme and Koehn, Philipp", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.5", doi = "10.18653/v1/2023.wmt-1.5", pages = "95--102", abstract = "Building upon prior WMT shared tasks in document alignment and sentence filtering, we posed the open-ended shared task of finding the best subset of possible training data from a collection of Estonian-Lithuanian web data. Participants could focus on any portion of the end-to-end data curation pipeline, including alignment and filtering. We evaluated results based on downstream machine translation quality. We release processed Common Crawl data, along with various intermediate states from a strong baseline system, which we believe will enable future research on this topic.", }
Building upon prior WMT shared tasks in document alignment and sentence filtering, we posed the open-ended shared task of finding the best subset of possible training data from a collection of Estonian-Lithuanian web data. Participants could focus on any portion of the end-to-end data curation pipeline, including alignment and filtering. We evaluated results based on downstream machine translation quality. We release processed Common Crawl data, along with various intermediate states from a strong baseline system, which we believe will enable future research on this topic.
[ "Sloto, Steve", "Thompson, Brian", "Khayrallah, Huda", "Domhan, Tobias", "Gowda, Thamme", "Koehn, Philipp" ]
Findings of the WMT 2023 Shared Task on Parallel Data Curation
wmt-1.5
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.6.bib
https://aclanthology.org/2023.wmt-1.6/
@inproceedings{cruz-2023-samsung, title = "{S}amsung {R}{\&}{D} Institute {P}hilippines at {WMT} 2023", author = "Cruz, Jan Christian Blaise", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.6", doi = "10.18653/v1/2023.wmt-1.6", pages = "103--109", abstract = "In this paper, we describe the constrained submission systems of Samsung R{\&}D Institute Philippines to the WMT 2023 General Translation Task for two directions: en-{\textgreater}he and he-{\textgreater}en. Our systems comprise of Transformer-based sequence-to-sequence models that are trained with a mix of best practices: comprehensive data preprocessing pipelines, synthetic backtranslated data, and the use of noisy channel reranking during online decoding. Our models perform comparably to, and sometimes outperform, strong baseline unconstrained systems such as mBART50 M2M and NLLB 200 MoE despite having significantly fewer parameters on two public benchmarks: FLORES-200 and NTREX-128.", }
In this paper, we describe the constrained submission systems of Samsung R&D Institute Philippines to the WMT 2023 General Translation Task for two directions: en→he and he→en. Our systems comprise Transformer-based sequence-to-sequence models that are trained with a mix of best practices: comprehensive data preprocessing pipelines, synthetic backtranslated data, and the use of noisy channel reranking during online decoding. Our models perform comparably to, and sometimes outperform, strong baseline unconstrained systems such as mBART50 M2M and NLLB 200 MoE despite having significantly fewer parameters on two public benchmarks: FLORES-200 and NTREX-128.
[ "Cruz, Jan Christian Blaise" ]
Samsung R&D Institute Philippines at WMT 2023
wmt-1.6
2310.16322
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.7.bib
https://aclanthology.org/2023.wmt-1.7/
@inproceedings{deguchi-etal-2023-naist, title = "{NAIST}-{NICT} {WMT}{'}23 General {MT} Task Submission", author = "Deguchi, Hiroyuki and Imamura, Kenji and Nishida, Yuto and Sakai, Yusuke and Vasselli, Justin and Watanabe, Taro", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.7", doi = "10.18653/v1/2023.wmt-1.7", pages = "110--118", abstract = "In this paper, we describe our NAIST-NICT submission to the WMT{'}23 English ↔ Japanese general machine translation task. Our system generates diverse translation candidates and reranks them using a two-stage reranking system to find the best translation. First, we generated 50 candidates each from 18 translation methods using a variety of techniques to increase the diversity of the translation candidates. We trained seven models per language direction using various combinations of hyperparameters. From these models we used various decoding algorithms, ensembling the models, and using kNN-MT (Khandelwal et al., 2021). We processed the 900 translation candidates through a two-stage reranking system to find the most promising candidate. In the first step, we compared 50 candidates from each translation method using DrNMT (Lee et al., 2021) and returned the candidate with the best score. We ranked the final 18 candidates using COMET-MBR (Fernandes et al., 2022) and returned the best score as the system output. We found that generating diverse translation candidates improved translation quality using the well-designed reranker model.", }
In this paper, we describe our NAIST-NICT submission to the WMT'23 English ↔ Japanese general machine translation task. Our system generates diverse translation candidates and reranks them using a two-stage reranking system to find the best translation. First, we generated 50 candidates each from 18 translation methods using a variety of techniques to increase the diversity of the translation candidates. We trained seven models per language direction using various combinations of hyperparameters. From these models, we generated candidates using various decoding algorithms, model ensembling, and kNN-MT (Khandelwal et al., 2021). We processed the 900 translation candidates through a two-stage reranking system to find the most promising candidate. In the first step, we compared 50 candidates from each translation method using DrNMT (Lee et al., 2021) and returned the candidate with the best score. We ranked the final 18 candidates using COMET-MBR (Fernandes et al., 2022) and returned the best score as the system output. We found that generating diverse translation candidates improved translation quality when combined with a well-designed reranker model.
[ "Deguchi, Hiroyuki", "Imamura, Kenji", "Nishida, Yuto", "Sakai, Yusuke", "Vasselli, Justin", "Watanabe, Taro" ]
NAIST-NICT WMT'23 General MT Task Submission
wmt-1.7
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.8.bib
https://aclanthology.org/2023.wmt-1.8/
@inproceedings{jon-etal-2023-cuni, title = "{CUNI} at {WMT}23 General Translation Task: {MT} and a Genetic Algorithm", author = "Jon, Josef and Popel, Martin and Bojar, Ond{\v{r}}ej", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.8", doi = "10.18653/v1/2023.wmt-1.8", pages = "119--127", abstract = "This paper presents the contributions of Charles University teams to the WMT23 General translation task (English to Czech and Czech to Ukrainian translation directions). Our main submission, CUNI-GA, is a result of applying a novel n-best list reranking and modification method on translation candidates produced by the two other submitted systems, CUNI-Transformer and CUNI-DocTransformer (document-level translation only used for the $en \rightarrow cs$ direction). Our method uses a genetic algorithm and MBR decoding to search for optimal translation under a given metric (in our case, a weighted combination of ChrF, BLEU, COMET22-DA, and COMET22-QE-DA). Our submissions are first in the constrained track and show competitive performance against top-tier unconstrained systems across various automatic metrics.", }
This paper presents the contributions of Charles University teams to the WMT23 General translation task (English to Czech and Czech to Ukrainian translation directions). Our main submission, CUNI-GA, is a result of applying a novel n-best list reranking and modification method on translation candidates produced by the two other submitted systems, CUNI-Transformer and CUNI-DocTransformer (document-level translation only used for the en→cs direction). Our method uses a genetic algorithm and MBR decoding to search for the optimal translation under a given metric (in our case, a weighted combination of ChrF, BLEU, COMET22-DA, and COMET22-QE-DA). Our submissions are first in the constrained track and show competitive performance against top-tier unconstrained systems across various automatic metrics.
[ "Jon, Josef", "Popel, Martin", "Bojar, Ond{\\v{r}}ej" ]
CUNI at WMT23 General Translation Task: MT and a Genetic Algorithm
wmt-1.8
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.9.bib
https://aclanthology.org/2023.wmt-1.9/
@inproceedings{kudo-etal-2023-skim, title = "{SKIM} at {WMT} 2023 General Translation Task", author = "Kudo, Keito and Ito, Takumi and Morishita, Makoto and Suzuki, Jun", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.9", doi = "10.18653/v1/2023.wmt-1.9", pages = "128--136", abstract = "The SKIM team{'}s submission used a standard procedure to build ensemble Transformer models, including base-model training, back-translation of base models for data augmentation, and retraining of several final models using back-translated training data. Each final model had its own architecture and configuration, including up to 10.5B parameters, and substituted self- and cross-sublayers in the decoder with a cross+self-attention sub-layer. We selected the best candidate from a large candidate pool, namely 70 translations generated from 13 distinct models for each sentence, using an MBR reranking method using COMET and COMET-QE. We also applied data augmentation and selection techniques to the training data of the Transformer models.", }
The SKIM team's submission used a standard procedure to build ensemble Transformer models, including base-model training, back-translation of base models for data augmentation, and retraining of several final models using back-translated training data. Each final model had its own architecture and configuration, including up to 10.5B parameters, and substituted self- and cross-sublayers in the decoder with a cross+self-attention sub-layer. We selected the best candidate from a large candidate pool, namely 70 translations generated from 13 distinct models for each sentence, using an MBR reranking method using COMET and COMET-QE. We also applied data augmentation and selection techniques to the training data of the Transformer models.
[ "Kudo, Keito", "Ito, Takumi", "Morishita, Makoto", "Suzuki, Jun" ]
SKIM at WMT 2023 General Translation Task
wmt-1.9
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.10.bib
https://aclanthology.org/2023.wmt-1.10/
@inproceedings{li-etal-2023-kyb, title = "{KYB} General Machine Translation Systems for {WMT}23", author = "Li, Ben and Matsuzaki, Yoko and Kalkar, Shivam", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.10", doi = "10.18653/v1/2023.wmt-1.10", pages = "137--142", abstract = "This paper describes our approach to constructing a neural machine translation system for the WMT 2023 general machine translation shared task. Our model is based on the Transformer architecture{'}s base settings. We optimize system performance through various strategies. Enhancing our model{'}s capabilities involves fine-tuning the pretrained model with an extended dataset. To further elevate translation quality, specialized pre- and post-processing techniques are deployed. Our central focus is on efficient model training, aiming for exceptional accuracy through the synergy of a compact model and curated data. We also performed ensembling augmented by N-best ranking, for both directions of English to Japanese and Japanese to English translation.", }
This paper describes our approach to constructing a neural machine translation system for the WMT 2023 general machine translation shared task. Our model is based on the Transformer architecture's base settings. We optimize system performance through various strategies. Enhancing our model's capabilities involves fine-tuning the pretrained model with an extended dataset. To further elevate translation quality, specialized pre- and post-processing techniques are deployed. Our central focus is on efficient model training, aiming for exceptional accuracy through the synergy of a compact model and curated data. We also performed ensembling augmented by N-best ranking, for both directions of English to Japanese and Japanese to English translation.
[ "Li, Ben", "Matsuzaki, Yoko", "Kalkar, Shivam" ]
KYB General Machine Translation Systems for WMT23
wmt-1.10
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.11.bib
https://aclanthology.org/2023.wmt-1.11/
@inproceedings{min-etal-2023-yishu, title = "Yishu: Yishu at {WMT}2023 Translation Task", author = "Min, Luo and Tan, Yixin and Chen, Qiulin", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.11", doi = "10.18653/v1/2023.wmt-1.11", pages = "143--149", abstract = "This paper introduces the Dtranx AI translation system, developed for the WMT 2023 Universal Translation Shared Task. Our team participated in two language directions: English to Chinese and Chinese to English. Our primary focus was on enhancing the effectiveness of the Chinese-to-English model through the implementation of bilingual models. Our approach involved various techniques such as data corpus filtering, model size scaling, sparse expert models (especially the Transformer model with adapters), large-scale back-translation, and language model reordering. According to automatic evaluation, our system secured the first place in the English-to-Chinese category and the second place in the Chinese-to-English category.", }
This paper introduces the Dtranx AI translation system, developed for the WMT 2023 Universal Translation Shared Task. Our team participated in two language directions: English to Chinese and Chinese to English. Our primary focus was on enhancing the effectiveness of the Chinese-to-English model through the implementation of bilingual models. Our approach involved various techniques such as data corpus filtering, model size scaling, sparse expert models (especially the Transformer model with adapters), large-scale back-translation, and language model reordering. According to automatic evaluation, our system secured the first place in the English-to-Chinese category and the second place in the Chinese-to-English category.
[ "Min, Luo", "Tan, Yixin", "Chen, Qiulin" ]
Yishu: Yishu at WMT2023 Translation Task
wmt-1.11
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.12.bib
https://aclanthology.org/2023.wmt-1.12/
@inproceedings{molchanov-kovalenko-2023-promt, title = "{PROMT} Systems for {WMT}23 Shared General Translation Task", author = "Molchanov, Alexander and Kovalenko, Vladislav", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.12", doi = "10.18653/v1/2023.wmt-1.12", pages = "150--154", abstract = "This paper describes the PROMT submissions for the WMT23 Shared General Translation Task. This year we participated in two directions of the Shared Translation Task: English to Russian and Russian to English. Our models are trained with the MarianNMT toolkit using the transformer-big configuration. We use BPE for text encoding, both models are unconstrained. We achieve competitive results according to automatic metrics in both directions.", }
This paper describes the PROMT submissions for the WMT23 Shared General Translation Task. This year we participated in two directions of the Shared Translation Task: English to Russian and Russian to English. Our models are trained with the MarianNMT toolkit using the transformer-big configuration. We use BPE for text encoding; both models are unconstrained. We achieve competitive results according to automatic metrics in both directions.
[ "Molchanov, Alex", "er", "Kovalenko, Vladislav" ]
PROMT Systems for WMT23 Shared General Translation Task
wmt-1.12
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.13.bib
https://aclanthology.org/2023.wmt-1.13/
@inproceedings{rikters-miwa-2023-aist, title = "{AIST} {AIRC} Submissions to the {WMT}23 Shared Task", author = "Rikters, Matiss and Miwa, Makoto", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.13", doi = "10.18653/v1/2023.wmt-1.13", pages = "155--161", abstract = "This paper describes the development process of NMT systems that were submitted to the WMT 2023 General Translation task by the team of AIST AIRC. We trained constrained track models for translation between English, German, and Japanese. Before training the final models, we first filtered the parallel and monolingual data, then performed iterative back-translation as well as parallel data distillation to be used for non-autoregressive model training. We experimented with training Transformer models, Mega models, and custom non-autoregressive sequence-to-sequence models with encoder and decoder weights initialised by a multilingual BERT base. Our primary submissions contain translations from ensembles of two Mega model checkpoints and our contrastive submissions are generated by our non-autoregressive models.", }
This paper describes the development process of NMT systems that were submitted to the WMT 2023 General Translation task by the team of AIST AIRC. We trained constrained track models for translation between English, German, and Japanese. Before training the final models, we first filtered the parallel and monolingual data, then performed iterative back-translation as well as parallel data distillation to be used for non-autoregressive model training. We experimented with training Transformer models, Mega models, and custom non-autoregressive sequence-to-sequence models with encoder and decoder weights initialised by a multilingual BERT base. Our primary submissions contain translations from ensembles of two Mega model checkpoints and our contrastive submissions are generated by our non-autoregressive models.
[ "Rikters, Matiss", "Miwa, Makoto" ]
AIST AIRC Submissions to the WMT23 Shared Task
wmt-1.13
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
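The AIST AIRC entry above relies on iterative back-translation to augment its training data. The following is a schematic, self-contained sketch of one such round; train and translate are trivial stand-ins (assumptions) for real MarianNMT or fairseq training and decoding runs.

```python
from typing import List, Tuple

Pair = Tuple[str, str]

def train(pairs: List[Pair]) -> List[Pair]:
    # Stand-in: a real system would fit an NMT model here; in this
    # sketch the "model" is just the parallel data it was given.
    return pairs

def translate(model: List[Pair], sentences: List[str]) -> List[str]:
    # Stand-in: dictionary lookup instead of real beam-search decoding.
    lookup = dict(model)
    return [lookup.get(s, s) for s in sentences]

def back_translation_round(parallel: List[Pair], mono_tgt: List[str]) -> List[Pair]:
    # 1. Train a reverse (target->source) model on flipped parallel data.
    reverse = train([(tgt, src) for src, tgt in parallel])
    # 2. Back-translate monolingual target text into synthetic sources.
    synthetic_src = translate(reverse, mono_tgt)
    # 3. Retrain the forward model on real plus synthetic pairs;
    #    repeating this step gives iterative back-translation.
    return train(parallel + list(zip(synthetic_src, mono_tgt)))
```

Iterating this round, alternating directions, yields progressively stronger synthetic data, which the abstract additionally distills for non-autoregressive model training.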
https://aclanthology.org/2023.wmt-1.14.bib
https://aclanthology.org/2023.wmt-1.14/
@inproceedings{rychly-teslia-2023-muni, title = "{MUNI}-{NLP} Submission for {C}zech-{U}krainian Translation Task at {WMT}23", author = "Rychly, Pavel and Teslia, Yuliia", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.14", doi = "10.18653/v1/2023.wmt-1.14", pages = "162--165", abstract = "The system is trained on officially provided data only. We have heavily filtered all the data to remove machine-translated text, Russian text and other noise. We use the DeepNorm modification of the transformer architecture in the TorchScale library with 18 encoder layers and 6 decoder layers. The initial systems for backtranslation use the HFT tokenizer; the final system uses a custom tokenizer derived from HFT.", }
The system is trained on officially provided data only. We have heavily filtered all the data to remove machine-translated text, Russian text and other noise. We use the DeepNorm modification of the transformer architecture in the TorchScale library with 18 encoder layers and 6 decoder layers. The initial systems for backtranslation use the HFT tokenizer; the final system uses a custom tokenizer derived from HFT.
[ "Rychly, Pavel", "Teslia, Yuliia" ]
MUNI-NLP Submission for Czech-Ukrainian Translation Task at WMT23
wmt-1.14
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
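The MUNI-NLP entry above filters Russian text and other noise out of its Czech-Ukrainian training data. A minimal sketch of one such filter using the off-the-shelf langid package; the authors' actual pipeline is not described in detail, so treat this as an assumption.

```python
import langid

def keep_pair(cs: str, uk: str) -> bool:
    # langid.classify returns (language_code, score); reject pairs whose
    # sides are not identified as Czech and Ukrainian, which discards
    # Russian-language noise among other things.
    return langid.classify(cs)[0] == "cs" and langid.classify(uk)[0] == "uk"

pairs = [("Dobrý den, jak se máte?", "Добрий день, як справи?"),
         ("Привет, как дела?", "Привет, как дела?")]  # Russian noise
filtered = [p for p in pairs if keep_pair(*p)]
```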
https://aclanthology.org/2023.wmt-1.15.bib
https://aclanthology.org/2023.wmt-1.15/
@inproceedings{wu-hu-2023-exploring, title = "Exploring Prompt Engineering with {GPT} Language Models for Document-Level Machine Translation: Insights and Findings", author = "Wu, Yangjian and Hu, Gang", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.15", doi = "10.18653/v1/2023.wmt-1.15", pages = "166--169", abstract = "This paper describes Lan-Bridge Translation systems for the WMT 2023 General Translation shared task. We participate in 2 directions: English to and from Chinese. With the emergence of large-scale models, various industries have undergone significant transformations, particularly in the realm of document-level machine translation. This has introduced a novel research paradigm that we have embraced in our participation in the WMT23 competition. Focusing on advancements in models such as GPT-3.5 and GPT-4, we have undertaken numerous prompt-based experiments. Our objective is to achieve optimal human evaluation results for document-level machine translation, resulting in our submission of the final outcomes in the general track.", }
This paper describes Lan-Bridge Translation systems for the WMT 2023 General Translation shared task. We participate in 2 directions: English to and from Chinese. With the emergence of large-scale models, various industries have undergone significant transformations, particularly in the realm of document-level machine translation. This has introduced a novel research paradigm that we have embraced in our participation in the WMT23 competition. Focusing on advancements in models such as GPT-3.5 and GPT-4, we have undertaken numerous prompt-based experiments. Our objective is to achieve optimal human evaluation results for document-level machine translation, resulting in our submission of the final outcomes in the general track.
[ "Wu, Yangjian", "Hu, Gang" ]
Exploring Prompt Engineering with GPT Language Models for Document-Level Machine Translation: Insights and Findings
wmt-1.15
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
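The Lan-Bridge entry above explores prompt engineering with GPT models for document-level translation. A minimal sketch of the general pattern using the OpenAI Python client; the prompt wording below is an assumption, as the paper's actual prompts are not reproduced in this abstract.

```python
# Prompt-based document-level translation sketch (illustrative prompt).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def translate_document(paragraphs: list[str], direction: str = "English to Chinese") -> str:
    doc = "\n\n".join(paragraphs)  # keep the whole document in one request
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"You are a professional translator. Translate the "
                        f"following document from {direction}, preserving "
                        f"paragraph boundaries and document-level coherence."},
            {"role": "user", "content": doc},
        ],
    )
    return resp.choices[0].message.content
```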
https://aclanthology.org/2023.wmt-1.16.bib
https://aclanthology.org/2023.wmt-1.16/
@inproceedings{wu-etal-2023-treating, title = "Treating General {MT} Shared Task as a Multi-Domain Adaptation Problem: {HW}-{TSC}{'}s Submission to the {WMT}23 General {MT} Shared Task", author = "Wu, Zhanglin and Wei, Daimeng and Li, Zongyao and Yu, Zhengzhe and Li, Shaojun and Chen, Xiaoyu and Shang, Hengchao and Guo, Jiaxin and Xie, Yuhao and Lei, Lizhi and Yang, Hao and Jiang, Yanfei", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.16", doi = "10.18653/v1/2023.wmt-1.16", pages = "170--174", abstract = "This paper presents the submission of Huawei Translate Services Center (HW-TSC) to the WMT23 general machine translation (MT) shared task, in which we participate in the Chinese↔English (zh↔en) language pair. We use the Transformer architecture and obtain the best performance via a variant with a larger parameter size. We perform fine-grained pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. We mainly use model enhancement strategies, including Regularized Dropout, Bidirectional Training, Data Diversification, Forward Translation, Back Translation, Alternated Training, Curriculum Learning and Transductive Ensemble Learning. Our submissions obtain competitive results in the final evaluation.", }
This paper presents the submission of Huawei Translate Services Center (HW-TSC) to the WMT23 general machine translation (MT) shared task, in which we participate in the Chinese↔English (zh↔en) language pair. We use the Transformer architecture and obtain the best performance via a variant with a larger parameter size. We perform fine-grained pre-processing and filtering on the provided large-scale bilingual and monolingual datasets. We mainly use model enhancement strategies, including Regularized Dropout, Bidirectional Training, Data Diversification, Forward Translation, Back Translation, Alternated Training, Curriculum Learning and Transductive Ensemble Learning. Our submissions obtain competitive results in the final evaluation.
[ "Wu, Zhanglin", "Wei, Daimeng", "Li, Zongyao", "Yu, Zhengzhe", "Li, Shaojun", "Chen, Xiaoyu", "Shang, Hengchao", "Guo, Jiaxin", "Xie, Yuhao", "Lei, Lizhi", "Yang, Hao", "Jiang, Yanfei" ]
Treating General MT Shared Task as a Multi-Domain Adaptation Problem: HW-TSC's Submission to the WMT23 General MT Shared Task
wmt-1.16
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
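Among the strategies listed in the HW-TSC entry above, Regularized Dropout (R-Drop) is easy to show compactly: the same batch is passed through the model twice, and the symmetric KL divergence between the two dropout-perturbed output distributions is added to the loss. A minimal PyTorch sketch for a classification-style head; the paper's exact formulation and loss weighting are not given here.

```python
import torch
import torch.nn.functional as F

def r_drop_loss(model, inputs, labels, alpha: float = 5.0):
    # model must be in train mode so that dropout makes the passes differ.
    logits1 = model(inputs)   # first stochastic forward pass
    logits2 = model(inputs)   # second stochastic forward pass
    ce = 0.5 * (F.cross_entropy(logits1, labels)
                + F.cross_entropy(logits2, labels))
    p = F.log_softmax(logits1, dim=-1)
    q = F.log_softmax(logits2, dim=-1)
    # Symmetric KL between the two output distributions.
    kl = 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                + F.kl_div(q, p, log_target=True, reduction="batchmean"))
    return ce + alpha * kl
```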
https://aclanthology.org/2023.wmt-1.17.bib
https://aclanthology.org/2023.wmt-1.17/
@inproceedings{wu-etal-2023-uva, title = "{U}v{A}-{MT}{'}s Participation in the {WMT} 2023 General Translation Shared Task", author = "Wu, Di and Tan, Shaomu and Stap, David and Araabi, Ali and Monz, Christof", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.17", doi = "10.18653/v1/2023.wmt-1.17", pages = "175--180", abstract = "This paper describes UvA-MT{'}s submission to the WMT 2023 shared task on general machine translation. We participate in the constrained track in two directions: English $\leftrightarrow$ Hebrew. In this competition, we show that by using one model to handle bidirectional tasks, as a minimal setting of Multilingual Machine Translation (MMT), it is possible to achieve comparable results to those of traditional bilingual translation for both directions. By including effective strategies like back-translation, a re-parameterized embedding table, and task-oriented fine-tuning, we obtained competitive final results in the automatic evaluation for both English $\rightarrow$ Hebrew and Hebrew $\rightarrow$ English directions.", }
This paper describes UvA-MT{'}s submission to the WMT 2023 shared task on general machine translation. We participate in the constrained track in two directions: English $\leftrightarrow$ Hebrew. In this competition, we show that by using one model to handle bidirectional tasks, as a minimal setting of Multilingual Machine Translation (MMT), it is possible to achieve comparable results to those of traditional bilingual translation for both directions. By including effective strategies like back-translation, a re-parameterized embedding table, and task-oriented fine-tuning, we obtained competitive final results in the automatic evaluation for both English $\rightarrow$ Hebrew and Hebrew $\rightarrow$ English directions.
[ "Wu, Di", "Tan, Shaomu", "Stap, David", "Araabi, Ali", "Monz, Christof" ]
UvA-MT's Participation in the WMT 2023 General Translation Shared Task
wmt-1.17
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
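The UvA-MT entry above handles English-Hebrew in both directions with a single model, a minimal multilingual setup. One standard way to do this, sketched below under the assumption of target-language tags (the paper's exact preprocessing may differ), is to prepend a tag telling the model which direction to translate.

```python
# Build bidirectional training examples by tagging the target language.
def make_bidirectional(pairs_en_he):
    examples = []
    for en, he in pairs_en_he:
        examples.append((f"<2he> {en}", he))  # English -> Hebrew
        examples.append((f"<2en> {he}", en))  # Hebrew -> English
    return examples

data = make_bidirectional([("Hello.", "שלום.")])
# At inference, the same tag selects the translation direction.
```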
https://aclanthology.org/2023.wmt-1.18.bib
https://aclanthology.org/2023.wmt-1.18/
@inproceedings{zeng-2023-achieving, title = "Achieving State-of-the-Art Multilingual Translation Model with Minimal Data and Parameters", author = "Zeng, Hui", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.18", doi = "10.18653/v1/2023.wmt-1.18", pages = "181--186", abstract = "This is LanguageX (ZengHuiMT){'}s submission to the WMT 2023 General Machine Translation task for 13 language directions. We initially employ an encoder-decoder model to train on all 13 competition translation directions as our baseline system. Subsequently, we adopt a decoder-only architecture and fine-tune a multilingual language model by partially sampling data from diverse multilingual datasets such as CC100 and WuDaoCorpora. This is further refined using carefully curated high-quality parallel corpora across multiple translation directions to enable the model to perform translation tasks. As per automated evaluation metrics, our model ranks first in the translation directions from English to Russian, English to German, and English to Ukrainian. It secures the second position in the directions from English to Czech, English to Hebrew, Hebrew to English, and Ukrainian to English, and ranks third in German to English, Japanese to English, and Russian to English among all participating teams. Our best-performing model, covering 13 translation directions, stands on par with GPT-4. Among all 13 translation directions, our multilingual model surpasses GPT-4 in BLEU scores for 7 translation directions.", }
This is LanguageX (ZengHuiMT){'}s submission to the WMT 2023 General Machine Translation task for 13 language directions. We initially employ an encoder-decoder model to train on all 13 competition translation directions as our baseline system. Subsequently, we adopt a decoder-only architecture and fine-tune a multilingual language model by partially sampling data from diverse multilingual datasets such as CC100 and WuDaoCorpora. This is further refined using carefully curated high-quality parallel corpora across multiple translation directions to enable the model to perform translation tasks. As per automated evaluation metrics, our model ranks first in the translation directions from English to Russian, English to German, and English to Ukrainian. It secures the second position in the directions from English to Czech, English to Hebrew, Hebrew to English, and Ukrainian to English, and ranks third in German to English, Japanese to English, and Russian to English among all participating teams. Our best-performing model, covering 13 translation directions, stands on par with GPT-4. Among all 13 translation directions, our multilingual model surpasses GPT-4 in BLEU scores for 7 translation directions.
[ "Zeng, Hui" ]
Achieving State-of-the-Art Multilingual Translation Model with Minimal Data and Parameters
wmt-1.18
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.19.bib
https://aclanthology.org/2023.wmt-1.19/
@inproceedings{zhang-2023-iol, title = "{IOL} Research Machine Translation Systems for {WMT}23 General Machine Translation Shared Task", author = "Zhang, Wenbo", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.19", doi = "10.18653/v1/2023.wmt-1.19", pages = "187--191", abstract = "This paper describes the IOL Research team{'}s submission systems for the WMT23 general machine translation shared task. We participated in two language translation directions, including English-to-Chinese and Chinese-to-English. Our final primary submissions belong to constrained systems, which means for both translation directions we only use officially provided monolingual and bilingual data to train the translation systems. Our systems are based on Transformer architecture with pre-norm or deep-norm, which has been proven to be helpful for training deeper models. We employ methods such as back-translation, data diversification, domain fine-tuning and model ensemble to build our translation systems. An important aspect worth mentioning is our careful data cleaning process and the utilization of a substantial amount of monolingual data for data augmentation. Compared with the baseline system, our submissions have a large improvement in BLEU score.", }
This paper describes the IOL Research team{'}s submission systems for the WMT23 general machine translation shared task. We participated in two language translation directions, including English-to-Chinese and Chinese-to-English. Our final primary submissions belong to constrained systems, which means for both translation directions we only use officially provided monolingual and bilingual data to train the translation systems. Our systems are based on Transformer architecture with pre-norm or deep-norm, which has been proven to be helpful for training deeper models. We employ methods such as back-translation, data diversification, domain fine-tuning and model ensemble to build our translation systems. An important aspect worth mentioning is our careful data cleaning process and the utilization of a substantial amount of monolingual data for data augmentation. Compared with the baseline system, our submissions have a large improvement in BLEU score.
[ "Zhang, Wenbo" ]
IOL Research Machine Translation Systems for WMT23 General Machine Translation Shared Task
wmt-1.19
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
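The IOL Research entry above lists model ensembling among its ingredients. One common, lightweight realisation is checkpoint averaging, sketched below in PyTorch; whether IOL averaged weights or ensembled at decode time is not stated, so this is illustrative only.

```python
import torch

def average_checkpoints(paths):
    # Average the parameter tensors of several saved state dicts.
    avg = None
    for p in paths:
        state = torch.load(p, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k in avg:
                avg[k] += state[k].float()
    return {k: v / len(paths) for k, v in avg.items()}

# Usage sketch:
# model.load_state_dict(average_checkpoints(["ckpt1.pt", "ckpt2.pt"]))
```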
https://aclanthology.org/2023.wmt-1.20.bib
https://aclanthology.org/2023.wmt-1.20/
@inproceedings{zong-2023-gtcom, title = "{GTCOM} and {DLUT}{'}s Neural Machine Translation Systems for {WMT}23", author = "Zong, Hao", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.20", doi = "10.18653/v1/2023.wmt-1.20", pages = "192--197", abstract = "This paper presents the submission by Global Tone Communication Co., Ltd. and Dalian University of Technology for the WMT23 shared general Machine Translation (MT) task at the Conference on Empirical Methods in Natural Language Processing (EMNLP). Our participation spans 8 language pairs, including English-Ukrainian, Ukrainian-English, Czech-Ukrainian, English-Hebrew, Hebrew-English, English-Czech, German-English, and Japanese-English. Our systems are designed without any specific constraints or requirements, allowing us to explore a wider range of possibilities in machine translation. We prioritize backtranslation, utilize multilingual translation models, and employ fine-tuning strategies to enhance performance. Additionally, we propose a novel data generation method that leverages human annotation to generate high-quality training data, resulting in improved system performance. Specifically, we use a combination of human-generated and machine-generated data to fine-tune our models, leading to more accurate translations. The automatic evaluation results show that our system ranks first in terms of BLEU score in Ukrainian-English, Hebrew-English, English-Hebrew, and German-English.", }
This paper presents the submission by Global Tone Communication Co., Ltd. and Dalian University of Technology for the WMT23 shared general Machine Translation (MT) task at the Conference on Empirical Methods in Natural Language Processing (EMNLP). Our participation spans 8 language pairs, including English-Ukrainian, Ukrainian-English, Czech-Ukrainian, English-Hebrew, Hebrew-English, English-Czech, German-English, and Japanese-English. Our systems are designed without any specific constraints or requirements, allowing us to explore a wider range of possibilities in machine translation. We prioritize backtranslation, utilize multilingual translation models, and employ fine-tuning strategies to enhance performance. Additionally, we propose a novel data generation method that leverages human annotation to generate high-quality training data, resulting in improved system performance. Specifically, we use a combination of human-generated and machine-generated data to fine-tune our models, leading to more accurate translations. The automatic evaluation results show that our system ranks first in terms of BLEU score in Ukrainian-English, Hebrew-English, English-Hebrew, and German-English.
[ "Zong, Hao" ]
GTCOM and DLUT's Neural Machine Translation Systems for WMT23
wmt-1.20
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.21.bib
https://aclanthology.org/2023.wmt-1.21/
@inproceedings{bawden-sagot-2023-rocs, title = "{R}o{CS}-{MT}: Robustness Challenge Set for Machine Translation", author = "Bawden, Rachel and Sagot, Beno{\^\i}t", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.21", doi = "10.18653/v1/2023.wmt-1.21", pages = "198--216", abstract = "RoCS-MT, a Robust Challenge Set for Machine Translation (MT), is designed to test MT systems{'} ability to translate user-generated content (UGC) that displays non-standard characteristics, such as spelling errors, devowelling, acronymisation, etc. RoCS-MT is composed of English comments from Reddit, selected for their non-standard nature, which have been manually normalised and professionally translated into five languages: French, German, Czech, Ukrainian and Russian. In the context of the WMT23 test suite shared task, we analyse the models submitted to the general MT task for all from-English language pairs, offering some insights into the types of problems faced by state-of-the-art MT models when dealing with non-standard UGC texts. We compare automatic metrics for MT quality, including quality estimation to see if the same conclusions can be drawn without references. In terms of robustness, we find that many of the systems struggle with non-standard variants of words (e.g. due to phonetically inspired spellings, contractions, truncations, etc.), but that this depends on the system and the amount of training data, with the best overall systems performing better across all phenomena. GPT-4 is the clear front-runner. However we caution against drawing conclusions about generalisation capacity as it and other systems could be trained on the source side of RoCS and also on similar data.", }
RoCS-MT, a Robust Challenge Set for Machine Translation (MT), is designed to test MT systems{'} ability to translate user-generated content (UGC) that displays non-standard characteristics, such as spelling errors, devowelling, acronymisation, etc. RoCS-MT is composed of English comments from Reddit, selected for their non-standard nature, which have been manually normalised and professionally translated into five languages: French, German, Czech, Ukrainian and Russian. In the context of the WMT23 test suite shared task, we analyse the models submitted to the general MT task for all from-English language pairs, offering some insights into the types of problems faced by state-of-the-art MT models when dealing with non-standard UGC texts. We compare automatic metrics for MT quality, including quality estimation to see if the same conclusions can be drawn without references. In terms of robustness, we find that many of the systems struggle with non-standard variants of words (e.g. due to phonetically inspired spellings, contractions, truncations, etc.), but that this depends on the system and the amount of training data, with the best overall systems performing better across all phenomena. GPT-4 is the clear front-runner. However we caution against drawing conclusions about generalisation capacity as it and other systems could be trained on the source side of RoCS and also on similar data.
[ "Bawden, Rachel", "Sagot, Beno{\\^\\i}t" ]
RoCS-MT: Robustness Challenge Set for Machine Translation
wmt-1.21
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.22.bib
https://aclanthology.org/2023.wmt-1.22/
@inproceedings{chen-etal-2023-multifaceted, title = "Multifaceted Challenge Set for Evaluating Machine Translation Performance", author = "Chen, Xiaoyu and Wei, Daimeng and Wu, Zhanglin and Zhu, Ting and Shang, Hengchao and Li, Zongyao and Guo, Jiaxin and Xie, Ning and Lei, Lizhi and Yang, Hao and Jiang, Yanfei", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.22", doi = "10.18653/v1/2023.wmt-1.22", pages = "217--223", abstract = "Machine Translation Evaluation is critical to Machine Translation research, as the evaluation results reflect the effectiveness of training strategies. As a result, a fair and efficient evaluation method is necessary. Many researchers have raised questions about currently available evaluation metrics from various perspectives, and propose suggestions accordingly. However, to our knowledge, few researchers have analyzed the difficulty level of the source sentence and its influence on evaluation results. This paper presents HW-TSC{'}s submission to the WMT23 MT Test Suites shared task. We propose a systematic approach for constructing challenge sets from four aspects: word difficulty, length difficulty, grammar difficulty and model learning difficulty. We open-source two Multifaceted Challenge Sets for Zh→En and En→Zh. We also present results of participants in this year{'}s General MT shared task on our test sets.", }
Machine Translation Evaluation is critical to Machine Translation research, as the evaluation results reflect the effectiveness of training strategies. As a result, a fair and efficient evaluation method is necessary. Many researchers have raised questions about currently available evaluation metrics from various perspectives, and propose suggestions accordingly. However, to our knowledge, few researchers have analyzed the difficulty level of the source sentence and its influence on evaluation results. This paper presents HW-TSC{'}s submission to the WMT23 MT Test Suites shared task. We propose a systematic approach for constructing challenge sets from four aspects: word difficulty, length difficulty, grammar difficulty and model learning difficulty. We open-source two Multifaceted Challenge Sets for Zh→En and En→Zh. We also present results of participants in this year{'}s General MT shared task on our test sets.
[ "Chen, Xiaoyu", "Wei, Daimeng", "Wu, Zhanglin", "Zhu, Ting", "Shang, Hengchao", "Li, Zongyao", "Guo, Jiaxin", "Xie, Ning", "Lei, Lizhi", "Yang, Hao", "Jiang, Yanfei" ]
Multifaceted Challenge Set for Evaluating Machine Translation Performance
wmt-1.22
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.23.bib
https://aclanthology.org/2023.wmt-1.23/
@inproceedings{manakhimova-etal-2023-linguistically, title = "Linguistically Motivated Evaluation of the 2023 State-of-the-art Machine Translation: Can {C}hat{GPT} Outperform {NMT}?", author = {Manakhimova, Shushen and Avramidis, Eleftherios and Macketanz, Vivien and Lapshinova-Koltunski, Ekaterina and Bagdasarov, Sergei and M{\"o}ller, Sebastian}, editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.23", doi = "10.18653/v1/2023.wmt-1.23", pages = "224--245", abstract = "This paper offers a fine-grained analysis of the machine translation outputs in the context of the Shared Task at the 8th Conference of Machine Translation (WMT23). Building on the foundation of previous test suite efforts, our analysis includes Large Language Models and an updated test set featuring new linguistic phenomena. To our knowledge, this is the first fine-grained linguistic analysis for the GPT-4 translation outputs. Our evaluation spans German-English, English-German, and English-Russian language directions. Some of the phenomena with the lowest accuracies for German-English are idioms and resultative predicates. For English-German, these include mediopassive voice, and noun formation(er). As for English-Russian, these included idioms and semantic roles. GPT-4 performs equally or comparably to the best systems in German-English and English-German but falls in the second significance cluster for English-Russian.", }
This paper offers a fine-grained analysis of the machine translation outputs in the context of the Shared Task at the 8th Conference of Machine Translation (WMT23). Building on the foundation of previous test suite efforts, our analysis includes Large Language Models and an updated test set featuring new linguistic phenomena. To our knowledge, this is the first fine-grained linguistic analysis for the GPT-4 translation outputs. Our evaluation spans German-English, English-German, and English-Russian language directions. Some of the phenomena with the lowest accuracies for German-English are idioms and resultative predicates. For English-German, these include mediopassive voice, and noun formation(er). As for English-Russian, these included idioms and semantic roles. GPT-4 performs equally or comparably to the best systems in German-English and English-German but falls in the second significance cluster for English-Russian.
[ "Manakhimova, Shushen", "Avramidis, Eleftherios", "Macketanz, Vivien", "Lapshinova-Koltunski, Ekaterina", "Bagdasarov, Sergei", "M{\\\"o}ller, Sebastian" ]
Linguistically Motivated Evaluation of the 2023 State-of-the-art Machine Translation: Can ChatGPT Outperform NMT?
wmt-1.23
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.24.bib
https://aclanthology.org/2023.wmt-1.24/
@inproceedings{mukherjee-shrivastava-2023-iiit, title = "{IIIT} {HYD}{'}s Submission for {WMT}23 Test-suite Task", author = "Mukherjee, Ananya and Shrivastava, Manish", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.24", doi = "10.18653/v1/2023.wmt-1.24", pages = "246--251", abstract = "This paper summarizes the results of our test suite evaluation on 12 machine translation systems submitted to the Shared Task of the 8th Conference of Machine Translation (WMT23) for the English-German (en-de) language pair. Our test suite covers five specific domains (entertainment, environment, health, science, legal) and spans five distinct writing styles (descriptive, judgments, narrative, reporting, technical-writing). We present our analysis through automatic evaluation methods, conducted with a focus on domain-specific and writing style-specific evaluations.", }
This paper summarizes the results of our test suite evaluation on 12 machine translation systems submitted to the Shared Task of the 8th Conference of Machine Translation (WMT23) for the English-German (en-de) language pair. Our test suite covers five specific domains (entertainment, environment, health, science, legal) and spans five distinct writing styles (descriptive, judgments, narrative, reporting, technical-writing). We present our analysis through automatic evaluation methods, conducted with a focus on domain-specific and writing style-specific evaluations.
[ "Mukherjee, Ananya", "Shrivastava, Manish" ]
IIIT HYD's Submission for WMT23 Test-suite Task
wmt-1.24
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.25.bib
https://aclanthology.org/2023.wmt-1.25/
@inproceedings{savoldi-etal-2023-test, title = "Test Suites Task: Evaluation of Gender Fairness in {MT} with {M}u{ST}-{SHE} and {INES}", author = "Savoldi, Beatrice and Gaido, Marco and Negri, Matteo and Bentivogli, Luisa", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.25", doi = "10.18653/v1/2023.wmt-1.25", pages = "252--262", abstract = "As part of the WMT-2023 {``}Test suites{''} shared task, in this paper we summarize the results of two test suite evaluations: MuST-SHEWMT23 and INES. By focusing on the en-de and de-en language pairs, we rely on these newly created test suites to investigate systems{'} ability to translate feminine and masculine gender and produce gender-inclusive translations. Furthermore, we discuss metrics associated with our test suites and validate them by means of human evaluations. Our results indicate that systems achieve reasonable and comparable performance in correctly translating both feminine and masculine gender forms for naturalistic gender phenomena. Instead, the generation of inclusive language forms in translation emerges as a challenging task for all the evaluated MT models, indicating room for future improvements and research on the topic. We make MuST-SHEWMT23 and INES freely available.", }
As part of the WMT-2023 {``}Test suites{''} shared task, in this paper we summarize the results of two test suite evaluations: MuST-SHEWMT23 and INES. By focusing on the en-de and de-en language pairs, we rely on these newly created test suites to investigate systems{'} ability to translate feminine and masculine gender and produce gender-inclusive translations. Furthermore, we discuss metrics associated with our test suites and validate them by means of human evaluations. Our results indicate that systems achieve reasonable and comparable performance in correctly translating both feminine and masculine gender forms for naturalistic gender phenomena. Instead, the generation of inclusive language forms in translation emerges as a challenging task for all the evaluated MT models, indicating room for future improvements and research on the topic. We make MuST-SHEWMT23 and INES freely available.
[ "Savoldi, Beatrice", "Gaido, Marco", "Negri, Matteo", "Bentivogli, Luisa" ]
Test Suites Task: Evaluation of Gender Fairness in MT with MuST-SHE and INES
wmt-1.25
2310.19345
[ "https://github.com/hlt-mt/fbk-fairseq" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.26.bib
https://aclanthology.org/2023.wmt-1.26/
@inproceedings{firdous-rauf-2023-biomedical, title = "Biomedical Parallel Sentence Retrieval Using Large Language Models", author = "Firdous, Sheema and Rauf, Sadaf Abdul", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.26", doi = "10.18653/v1/2023.wmt-1.26", pages = "263--270", abstract = "We have explored the effect of in-domain knowledge during parallel sentence filtering from in-domain corpora. Models built with sentences mined from in-domain corpora without domain knowledge performed poorly, whereas model performance improved by more than 2.3 BLEU points on average with further domain-centric filtering. We have used Large Language Models for selecting similar and domain-aligned sentences. Our experiments show the importance of including domain knowledge in sentence selection methodologies even if the initial comparable corpora are in-domain.", }
We have explored the effect of in-domain knowledge during parallel sentence filtering from in-domain corpora. Models built with sentences mined from in-domain corpora without domain knowledge performed poorly, whereas model performance improved by more than 2.3 BLEU points on average with further domain-centric filtering. We have used Large Language Models for selecting similar and domain-aligned sentences. Our experiments show the importance of including domain knowledge in sentence selection methodologies even if the initial comparable corpora are in-domain.
[ "Firdous, Sheema", "Rauf, Sadaf Abdul" ]
Biomedical Parallel Sentence Retrieval Using Large Language Models
wmt-1.26
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
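The entry above selects similar, domain-aligned sentence pairs with large language models. A minimal sketch of the general embed-and-score pattern, using LaBSE via sentence-transformers as an assumed stand-in for the paper's scorer (its exact model and threshold are not given in the abstract):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/LaBSE")

def mine_pairs(src_sents, tgt_sents, threshold=0.8):
    # Embed both sides into LaBSE's shared multilingual space.
    src_emb = model.encode(src_sents, convert_to_tensor=True,
                           normalize_embeddings=True)
    tgt_emb = model.encode(tgt_sents, convert_to_tensor=True,
                           normalize_embeddings=True)
    sims = util.cos_sim(src_emb, tgt_emb)  # (n_src, n_tgt) similarity matrix
    # Keep each source sentence's best target if it clears the threshold.
    return [(i, int(sims[i].argmax()), float(sims[i].max()))
            for i in range(len(src_sents))
            if float(sims[i].max()) >= threshold]
```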
https://aclanthology.org/2023.wmt-1.27.bib
https://aclanthology.org/2023.wmt-1.27/
@inproceedings{wu-etal-2023-path, title = "The Path to Continuous Domain Adaptation Improvements by {HW}-{TSC} for the {WMT}23 Biomedical Translation Shared Task", author = "Wu, Zhanglin and Wei, Daimeng and Li, Zongyao and Yu, Zhengzhe and Li, Shaojun and Chen, Xiaoyu and Shang, Hengchao and Guo, Jiaxin and Xie, Yuhao and Lei, Lizhi and Yang, Hao and Jiang, Yanfei", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.27", doi = "10.18653/v1/2023.wmt-1.27", pages = "271--274", abstract = "This paper presents the domain adaptation methods adopted by Huawei Translation Service Center (HW-TSC) to train the neural machine translation (NMT) system on the English↔German (en↔de) language pair of the WMT23 biomedical translation task. Our NMT system is built on deep Transformer with larger parameter sizes. Based on the biomedical NMT system trained last year, we leverage Curriculum Learning, Data Diversification, Forward translation, Back translation, and Transductive Ensemble Learning to further improve system performance. Overall, we believe our submission can achieve a highly competitive result in the official final evaluation.", }
This paper presents the domain adaptation methods adopted by Huawei Translation Service Center (HW-TSC) to train the neural machine translation (NMT) system on the English↔German (en↔de) language pair of the WMT23 biomedical translation task. Our NMT system is built on deep Transformer with larger parameter sizes. Based on the biomedical NMT system trained last year, we leverage Curriculum Learning, Data Diversification, Forward translation, Back translation, and Transductive Ensemble Learning to further improve system performance. Overall, we believe our submission can achieve a highly competitive result in the official final evaluation.
[ "Wu, Zhanglin", "Wei, Daimeng", "Li, Zongyao", "Yu, Zhengzhe", "Li, Shaojun", "Chen, Xiaoyu", "Shang, Hengchao", "Guo, Jiaxin", "Xie, Yuhao", "Lei, Lizhi", "Yang, Hao", "Jiang, Yanfei" ]
The Path to Continuous Domain Adaptation Improvements by HW-TSC for the WMT23 Biomedical Translation Shared Task
wmt-1.27
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.28.bib
https://aclanthology.org/2023.wmt-1.28/
@inproceedings{zhu-etal-2023-investigating, title = "Investigating Techniques for a Deeper Understanding of Neural Machine Translation ({NMT}) Systems through Data Filtering and Fine-tuning Strategies", author = "Zhu, Lichao and Zimina, Maria and B{\'e}nard, Maud and Namdar, Behnoosh and Ballier, Nicolas and Wisniewski, Guillaume and Yun{\`e}s, Jean-Baptiste", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.28", doi = "10.18653/v1/2023.wmt-1.28", pages = "275--281", abstract = "In the context of this biomedical shared task, we have implemented data filters to enhance the selection of relevant training data for fine-tuning from the available training data sources. Specifically, we have employed textometric analysis to detect repetitive segments within the test set, which we have then used for refining the training data used to fine-tune the mBart-50 baseline model. Through this approach, we aim to achieve several objectives: developing a practical fine-tuning strategy for training biomedical in-domain fr{\textless}{\textgreater}en models, defining criteria for filtering in-domain training data, and comparing model predictions, fine-tuning data in accordance with the test set to gain a deeper insight into the functioning of Neural Machine Translation (NMT) systems.", }
In the context of this biomedical shared task, we have implemented data filters to enhance the selection of relevant training data for fine-tuning from the available training data sources. Specifically, we have employed textometric analysis to detect repetitive segments within the test set, which we have then used for refining the training data used to fine-tune the mBart-50 baseline model. Through this approach, we aim to achieve several objectives: developing a practical fine-tuning strategy for training biomedical in-domain fr{\textless}{\textgreater}en models, defining criteria for filtering in-domain training data, and comparing model predictions, fine-tuning data in accordance with the test set to gain a deeper insight into the functioning of Neural Machine Translation (NMT) systems.
[ "Zhu, Lichao", "Zimina, Maria", "B{\\'e}nard, Maud", "Namdar, Behnoosh", "Ballier, Nicolas", "Wisniewski, Guillaume", "Yun{\\`e}s, Jean-Baptiste" ]
Investigating Techniques for a Deeper Understanding of Neural Machine Translation (NMT) Systems through Data Filtering and Fine-tuning Strategies
wmt-1.28
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.29.bib
https://aclanthology.org/2023.wmt-1.29/
@inproceedings{an-etal-2023-max, title = "{MAX}-{ISI} System at {WMT}23 Discourse-Level Literary Translation Task", author = "An, Li and Jin, Linghao and Ma, Xuezhe", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.29", doi = "10.18653/v1/2023.wmt-1.29", pages = "282--286", abstract = "This paper describes our translation systems for the WMT23 shared task. We participated in the discourse-level literary translation task - constrained track. In our methodology, we conduct a comparative analysis between the conventional Transformer model and the recently introduced MEGA model, which exhibits enhanced capabilities in modeling long-range sequences compared to the traditional Transformers. To explore whether language models can more effectively harness document-level context using paragraph-level data, we took the approach of aggregating sentences into paragraphs from the original literary dataset provided by the organizers. This paragraph-level data was utilized in both the Transformer and MEGA models. To ensure a fair comparison across all systems, we employed a sentence-alignment strategy to reverse our translation results from the paragraph-level back to the sentence-level alignment. Finally, our evaluation process encompassed sentence-level metrics such as BLEU, as well as two document-level metrics: d-BLEU and BlonDe.", }
This paper describes our translation systems for the WMT23 shared task. We participated in the discourse-level literary translation task - constrained track. In our methodology, we conduct a comparative analysis between the conventional Transformer model and the recently introduced MEGA model, which exhibits enhanced capabilities in modeling long-range sequences compared to the traditional Transformers. To explore whether language models can more effectively harness document-level context using paragraph-level data, we took the approach of aggregating sentences into paragraphs from the original literary dataset provided by the organizers. This paragraph-level data was utilized in both the Transformer and MEGA models. To ensure a fair comparison across all systems, we employed a sentence-alignment strategy to reverse our translation results from the paragraph-level back to the sentence-level alignment. Finally, our evaluation process encompassed sentence-level metrics such as BLEU, as well as two document-level metrics: d-BLEU and BlonDe.
[ "An, Li", "Jin, Linghao", "Ma, Xuezhe" ]
MAX-ISI System at WMT23 Discourse-Level Literary Translation Task
wmt-1.29
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
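The MAX-ISI entry above reports d-BLEU alongside sentence-level BLEU. d-BLEU is ordinary corpus BLEU computed after concatenating each document's sentences, which can be sketched with sacrebleu:

```python
import sacrebleu

def d_bleu(sys_docs, ref_docs):
    # Each *_docs argument: a list of documents, each a list of sentences.
    # Concatenate per document, then score with standard corpus BLEU.
    sys_cat = [" ".join(doc) for doc in sys_docs]
    ref_cat = [" ".join(doc) for doc in ref_docs]
    return sacrebleu.corpus_bleu(sys_cat, [ref_cat]).score

print(d_bleu([["He left .", "She stayed ."]],
             [["He left .", "She stayed ."]]))  # identical docs -> 100.0
```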
https://aclanthology.org/2023.wmt-1.30.bib
https://aclanthology.org/2023.wmt-1.30/
@inproceedings{lopez-etal-2023-make, title = "The {MAKE}-{NMTVIZ} System Description for the {WMT}23 Literary Task", author = "Lopez, Fabien and Gonz{\'a}lez, Gabriela and Hansen, Damien and Nakhle, Mariam and Namdarzadeh, Behnoosh and Ballier, Nicolas and Dinarelli, Marco and Esperan{\c{c}}a-Rodier, Emmanuelle and He, Sui and Mohseni, Sadaf and Rossi, Caroline and Schwab, Didier and Yang, Jun and Yun{\`e}s, Jean-Baptiste and Zhu, Lichao", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.30", doi = "10.18653/v1/2023.wmt-1.30", pages = "287--295", abstract = "This paper describes the MAKE-NMTVIZ Systems trained for the WMT 2023 Literary task. As a primary submission, we used Train, Valid1, test1 as part of the GuoFeng corpus (Wang et al., 2023) to fine-tune the mBART50 model with Chinese-English data. We followed very similar training parameters to (Lee et al. 2022) when fine-tuning mBART50. We trained for 3 epochs, using gelu as an activation function, with a learning rate of 0.05, dropout of 0.1 and a batch size of 16. We decoded using a beam search of size 5. For our contrastive1 submission, we implemented a fine-tuned concatenation transformer (Lupo et al., 2023). The training was developed in two steps: (i) a sentence-level transformer was implemented for 10 epochs trained using general, test1, and valid1 data (more details in contrastive2 system); (ii) second, we fine-tuned at document-level using 3-sentence concatenation for 4 epochs using train, test2, and valid2 data. During the fine-tuning, we used ReLU as an activation function, with an inverse square root learning rate, dropout of 0.1, and a batch size of 64. We decoded using a beam search of size. For our contrastive2 and last submission, we implemented a sentence-level transformer model (Vaswani et al., 2017). The model was trained with general data for 10 epochs using general-purpose, test1, and valid1 data. The training parameters were an inverse square root scheduled learning rate, a dropout of 0.1, and a batch size of 64. We decoded using a beam search of size 4. We then compared the three translation outputs from an interdisciplinary perspective, investigating some of the effects of sentence- vs document-based training. Computer scientists, translators and corpus linguists discussed the remaining linguistic issues for this discourse-level literary translation.", }
This paper describes the MAKE-NMTVIZ Systems trained for the WMT 2023 Literary task. As a primary submission, we used Train, Valid1, test1 as part of the GuoFeng corpus (Wang et al., 2023) to fine-tune the mBART50 model with Chinese-English data. We followed very similar training parameters to (Lee et al. 2022) when fine-tuning mBART50. We trained for 3 epochs, using gelu as an activation function, with a learning rate of 0.05, dropout of 0.1 and a batch size of 16. We decoded using a beam search of size 5. For our contrastive1 submission, we implemented a fine-tuned concatenation transformer (Lupo et al., 2023). The training was developed in two steps: (i) a sentence-level transformer was implemented for 10 epochs trained using general, test1, and valid1 data (more details in contrastive2 system); (ii) second, we fine-tuned at document-level using 3-sentence concatenation for 4 epochs using train, test2, and valid2 data. During the fine-tuning, we used ReLU as an activation function, with an inverse square root learning rate, dropout of 0.1, and a batch size of 64. We decoded using a beam search of size. For our contrastive2 and last submission, we implemented a sentence-level transformer model (Vaswani et al., 2017). The model was trained with general data for 10 epochs using general-purpose, test1, and valid1 data. The training parameters were an inverse square root scheduled learning rate, a dropout of 0.1, and a batch size of 64. We decoded using a beam search of size 4. We then compared the three translation outputs from an interdisciplinary perspective, investigating some of the effects of sentence- vs document-based training. Computer scientists, translators and corpus linguists discussed the remaining linguistic issues for this discourse-level literary translation.
[ "Lopez, Fabien", "Gonz{\\'a}lez, Gabriela", "Hansen, Damien", "Nakhle, Mariam", "Namdarzadeh, Behnoosh", "Ballier, Nicolas", "Dinarelli, Marco", "Esperan{\\c{c}}a-Rodier, Emmanuelle", "He, Sui", "Mohseni, Sadaf", "Rossi, Caroline", "Schwab, Didier", "Yang, Jun", "Yun{\\`e}s, Jean-Baptiste", "Zhu, Lichao" ]
The MAKE-NMTVIZ System Description for the WMT23 Literary Task
wmt-1.30
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
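The MAKE-NMTVIZ entry above fine-tunes mBART50 on Chinese-English data and decodes with beam size 5. Below is a minimal sketch of one fine-tuning step and beam-5 decoding with the Hugging Face mbart-large-50 checkpoint; the optimizer settings are illustrative, not the paper's.

```python
import torch
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

name = "facebook/mbart-large-50-many-to-many-mmt"
tok = MBart50TokenizerFast.from_pretrained(name, src_lang="zh_CN", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained(name)
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)  # illustrative LR

# One training step: tokenize source and target together.
batch = tok(["你好，世界。"], text_target=["Hello, world."],
            return_tensors="pt", padding=True)
loss = model(**batch).loss   # standard seq2seq cross-entropy
loss.backward()
opt.step()

# Decoding with a beam of size 5, as in the primary submission.
out = model.generate(**tok(["你好。"], return_tensors="pt"),
                     num_beams=5,
                     forced_bos_token_id=tok.lang_code_to_id["en_XX"])
print(tok.batch_decode(out, skip_special_tokens=True))
```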
https://aclanthology.org/2023.wmt-1.31.bib
https://aclanthology.org/2023.wmt-1.31/
@inproceedings{zhao-etal-2023-dutnlp, title = "{DUTNLP} System for the {WMT}2023 Discourse-Level Literary Translation", author = "Zhao, Anqi and Huang, Kaiyu and Yu, Hao and Huang, Degen", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.31", doi = "10.18653/v1/2023.wmt-1.31", pages = "296--301", abstract = "This paper describes the submission of DUTNLP Lab to WMT23 Discourse-Level Literary Translation in the Chinese to English translation direction under unconstrained conditions. Our primary system aims to leverage a large language model with various prompt strategies, which can fully investigate the potential capabilities of large language models for discourse-level neural machine translation. Moreover, we test a widely used discourse-level machine translation model, G-transformer, with different training strategies. In our experimental results, the method with large language models achieves a BLEU score of 28.16, while the fine-tuned method scores 25.26. These findings indicate that selecting appropriate prompt strategies based on large language models can significantly improve translation performance compared to traditional model training methods.", }
This paper describes the submission of DUTNLP Lab to WMT23 Discourse-Level Literary Translation in the Chinese to English translation direction under unconstrained conditions. Our primary system aims to leverage a large language model with various prompt strategies, which can fully investigate the potential capabilities of large language models for discourse-level neural machine translation. Moreover, we test a widely used discourse-level machine translation model, G-transformer, with different training strategies. In our experimental results, the method with large language models achieves a BLEU score of 28.16, while the fine-tuned method scores 25.26. These findings indicate that selecting appropriate prompt strategies based on large language models can significantly improve translation performance compared to traditional model training methods.
[ "Zhao, Anqi", "Huang, Kaiyu", "Yu, Hao", "Huang, Degen" ]
DUTNLP System for the WMT2023 Discourse-Level Literary Translation
wmt-1.31
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.32.bib
https://aclanthology.org/2023.wmt-1.32/
@inproceedings{xie-etal-2023-hw, title = "{HW}-{TSC}{'}s Submissions to the {WMT}23 Discourse-Level Literary Translation Shared Task", author = "Xie, Yuhao and Li, Zongyao and Wu, Zhanglin and Wei, Daimeng and Chen, Xiaoyu and Rao, Zhiqiang and Li, Shaojun and Shang, Hengchao and Guo, Jiaxin and Lei, Lizhi and Yang, Hao and Jiang, Yanfei", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.32", doi = "10.18653/v1/2023.wmt-1.32", pages = "302--306", abstract = "This paper introduces HW-TSC{'}s submission to the WMT23 Discourse-Level Literary Translation shared task. We use a standard sentence-level transformer as a baseline, and perform domain adaptation and discourse modeling to enhance discourse-level capabilities. Regarding domain adaptation, we employ Back-Translation, Forward-Translation and Data Diversification. For discourse modeling, we apply strategies such as Multi-resolutional Document-to-Document Translation and TrAining Data Augmentation.", }
This paper introduces HW-TSC{'}s submission to the WMT23 Discourse-Level Literary Translation shared task. We use a standard sentence-level transformer as a baseline, and perform domain adaptation and discourse modeling to enhance discourse-level capabilities. Regarding domain adaptation, we employ Back-Translation, Forward-Translation and Data Diversification. For discourse modeling, we apply strategies such as Multi-resolutional Document-to-Document Translation and TrAining Data Augmentation.
[ "Xie, Yuhao", "Li, Zongyao", "Wu, Zhanglin", "Wei, Daimeng", "Chen, Xiaoyu", "Rao, Zhiqiang", "Li, Shaojun", "Shang, Hengchao", "Guo, Jiaxin", "Lei, Lizhi", "Yang, Hao", "Jiang, Yanfei" ]
HW-TSC's Submissions to the WMT23 Discourse-Level Literary Translation Shared Task
wmt-1.32
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.33.bib
https://aclanthology.org/2023.wmt-1.33/
@inproceedings{zhu-xiong-2023-tjunlp, title = "{TJUNLP}: System Description for the {WMT}23 Literary Task in {C}hinese to {E}nglish Translation Direction", author = "Zhu, Shaolin and Xiong, Deyi", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.33", doi = "10.18653/v1/2023.wmt-1.33", pages = "307--311", abstract = "This paper introduces the overall situation of the Natural Language Processing Laboratory of Tianjin University participating in the WMT23 machine translation evaluation task from Chinese to English. For this evaluation, the base model used is a Transformer based on a Mixture of Experts (MOE) model. During the model{'}s construction and training, a basic dense model based on Transformer is first trained on the training set. Then, this model is used to initialize the MOE-based translation model, which is further trained on the training corpus. Since the training dataset provided for this translation task is relatively small, to better utilize sparse models to enhance translation, we employed a data augmentation technique for alignment. Experimental results show that this method can effectively improve neural machine translation performance.", }
This paper introduces the overall situation of the Natural Language Processing Laboratory of Tianjin University participating in the WMT23 machine translation evaluation task from Chinese to English. For this evaluation, the base model used is a Transformer based on a Mixture of Experts (MOE) model. During the model{'}s construction and training, a basic dense model based on Transformer is first trained on the training set. Then, this model is used to initialize the MOE-based translation model, which is further trained on the training corpus. Since the training dataset provided for this translation task is relatively small, to better utilize sparse models to enhance translation, we employed a data augmentation technique for alignment. Experimental results show that this method can effectively improve neural machine translation performance.
[ "Zhu, Shaolin", "Xiong, Deyi" ]
TJUNLP: System Description for the WMT23 Literary Task in Chinese to English Translation Direction
wmt-1.33
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
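The TJUNLP entry above builds its Transformer on a Mixture of Experts. A minimal PyTorch sketch of a top-1 gated MoE feed-forward block, the generic sparse layer such systems swap in for the dense feed-forward; the sizes and gating scheme here are illustrative, not TJUNLP's configuration.

```python
import torch
import torch.nn as nn

class Top1MoE(nn.Module):
    def __init__(self, d_model=512, d_ff=2048, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(),
                          nn.Linear(d_ff, d_model))
            for _ in range(n_experts))

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.gate(x).softmax(dim=-1)  # routing probabilities
        probs, idx = scores.max(dim=-1)        # top-1 expert per token
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = idx == e
            if mask.any():
                # Scale by the gate probability so routing stays differentiable.
                out[mask] = probs[mask].unsqueeze(-1) * expert(x[mask])
        return out

y = Top1MoE()(torch.randn(10, 512))  # each token visits exactly one expert
```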
https://aclanthology.org/2023.wmt-1.34.bib
https://aclanthology.org/2023.wmt-1.34/
@inproceedings{doumbouya-etal-2023-machine, title = "Machine Translation for Nko: Tools, Corpora, and Baseline Results", author = "Doumbouya, Moussa and Dian{\'e}, Baba Mamadi and Ciss{\'e}, Solo Farabado and Dian{\'e}, Djibrila and Sow, Abdoulaye and Doumbouya, S{\'e}r{\'e} Moussa and Bangoura, Daouda and Bayo, Fod{\'e} Moriba and Conde, Ibrahima Sory and Dian{\'e}, Kalo Mory and Piech, Chris and Manning, Christopher", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.34", doi = "10.18653/v1/2023.wmt-1.34", pages = "312--343", abstract = "Currently, there is no usable machine translation system for Nko, a language spoken by tens of millions of people across multiple West African countries, which holds significant cultural and educational value. To address this issue, we present a set of tools, resources, and baseline results aimed towards the development of usable machine translation systems for Nko and other languages that do not currently have sufficiently large parallel text corpora available. (1) Fria$\parallel$el: A novel collaborative parallel text curation software that incorporates quality control through copyedit-based workflows. (2) Expansion of the FLoRes-200 and NLLB-Seed corpora with 2,009 and 6,193 high-quality Nko translations in parallel with 204 and 40 other languages. (3) nicolingua-0005: A collection of trilingual and bilingual corpora with 130,850 parallel segments and monolingual corpora containing over 3 million Nko words. (4) Baseline bilingual and multilingual neural machine translation results with the best model scoring 30.83 English-Nko chrF++ on FLoRes-devtest.", }
Currently, there is no usable machine translation system for Nko, a language spoken by tens of millions of people across multiple West African countries, which holds significant cultural and educational value. To address this issue, we present a set of tools, resources, and baseline results aimed towards the development of usable machine translation systems for Nko and other languages that do not currently have sufficiently large parallel text corpora available. (1) Fria$\parallel$el: A novel collaborative parallel text curation software that incorporates quality control through copyedit-based workflows. (2) Expansion of the FLoRes-200 and NLLB-Seed corpora with 2,009 and 6,193 high-quality Nko translations in parallel with 204 and 40 other languages. (3) nicolingua-0005: A collection of trilingual and bilingual corpora with 130,850 parallel segments and monolingual corpora containing over 3 million Nko words. (4) Baseline bilingual and multilingual neural machine translation results with the best model scoring 30.83 English-Nko chrF++ on FLoRes-devtest.
[ "Doumbouya, Moussa", "Dian{\\'e}, Baba Mamadi", "Ciss{\\'e}, Solo Farabado", "Dian{\\'e}, Djibrila", "Sow, Abdoulaye", "Doumbouya, S{\\'e}r{\\'e} Moussa", "Bangoura, Daouda", "Bayo, Fod{\\'e} Moriba", "Conde, Ibrahima Sory", "Dian{\\'e}, Kalo Mory", "Piech, Chris", "Manning, Christopher" ]
Machine Translation for Nko: Tools, Corpora, and Baseline Results
wmt-1.34
[ "https://github.com/common-parallel-corpora/friallel" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
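The Nko paper reports its best model at 30.83 English-Nko chrF++ on FLoRes-devtest. chrF++ is chrF extended with word uni- and bigrams, and can be computed with the `sacrebleu` library; a minimal sketch on toy strings:

```python
import sacrebleu

hypotheses = ["this is a small test sentence"]
references = [["this is a short test sentence"]]   # one reference stream

# word_order=2 turns chrF into chrF++ (char n-grams + word 1/2-grams)
chrf_pp = sacrebleu.CHRF(word_order=2)
print(chrf_pp.corpus_score(hypotheses, references))
```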
https://aclanthology.org/2023.wmt-1.35.bib
https://aclanthology.org/2023.wmt-1.35/
@inproceedings{sandoval-castaneda-etal-2023-ttics, title = "{TTIC}{'}s Submission to {WMT}-{SLT} 23", author = "Sandoval-Castaneda, Marcelo and Li, Yanhong and Shi, Bowen and Brentari, Diane and Livescu, Karen and Shakhnarovich, Gregory", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.35", doi = "10.18653/v1/2023.wmt-1.35", pages = "344--350", abstract = "In this paper, we describe TTIC{'}s submission to WMT 2023 Sign Language Translation task on the Swiss-German Sign Language (DSGS) to German track. Our approach explores the advantages of using large-scale self-supervised pre-training in the task of sign language translation, over more traditional approaches that rely heavily on supervision, along with costly labels such as gloss annotations. The proposed model consists of a VideoSwin transformer for image encoding, and a T5 model adapted to receive VideoSwin features as input instead of text. In WMT-SLT 22{'}s development set, this system achieves 2.03 BLEU score, a 59{\%} increase over the previous best reported performance. In the official test set, our primary submission achieves 1.1 BLEU score and 17.0 chrF score.", }
In this paper, we describe TTIC{'}s submission to the WMT 2023 Sign Language Translation task on the Swiss-German Sign Language (DSGS) to German track. Our approach explores the advantages of large-scale self-supervised pre-training for sign language translation over more traditional approaches that rely heavily on supervision, along with costly labels such as gloss annotations. The proposed model consists of a VideoSwin transformer for image encoding and a T5 model adapted to receive VideoSwin features as input instead of text. On WMT-SLT 22{'}s development set, this system achieves a 2.03 BLEU score, a 59{\%} increase over the previous best reported performance. On the official test set, our primary submission achieves a 1.1 BLEU score and a 17.0 chrF score.
[ "S", "oval-Castaneda, Marcelo", "Li, Yanhong", "Shi, Bowen", "Brentari, Diane", "Livescu, Karen", "Shakhnarovich, Gregory" ]
TTIC's Submission to WMT-SLT 23
wmt-1.35
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
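TTIC's system feeds VideoSwin features into a T5 model in place of text. With HuggingFace `transformers`, this can be sketched by projecting the features into T5's embedding space and passing them as `inputs_embeds`; the 1024-dim feature size and `t5-small` checkpoint are assumptions, not the submission's actual configuration.

```python
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

model = T5ForConditionalGeneration.from_pretrained("t5-small")
tokenizer = T5Tokenizer.from_pretrained("t5-small")

# Map video features (e.g., 1024-dim VideoSwin outputs, one per clip window)
# into T5's embedding space so the encoder treats them like token embeddings.
video_feat_dim = 1024                              # assumed feature size
proj = nn.Linear(video_feat_dim, model.config.d_model)

video_feats = torch.randn(1, 64, video_feat_dim)   # (batch, frames, dim)
inputs_embeds = proj(video_feats)

labels = tokenizer("ein Beispielsatz", return_tensors="pt").input_ids
loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
loss.backward()   # trains the projection and (optionally) T5 itself
```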
https://aclanthology.org/2023.wmt-1.36.bib
https://aclanthology.org/2023.wmt-1.36/
@inproceedings{xu-etal-2023-knowcomp, title = "{K}now{C}omp Submission for {WMT}23 Sign Language Translation Task", author = "Xu, Baixuan and Shi, Haochen and Zheng, Tianshi and Zong, Qing and Wang, Weiqi and Wang, Zhaowei and Song, Yangqiu", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.36", doi = "10.18653/v1/2023.wmt-1.36", pages = "351--358", abstract = "Sign Language Translation (SLT) is a complex task that involves accurately interpreting sign language gestures and translating them into spoken or written language and vice versa. Its primary objective is to facilitate communication between individuals with hearing difficulties using deep learning systems. Existing approaches leverage gloss annotations of sign language gestures to assist the model in capturing the movement and differentiating various gestures. However, constructing a large-scale gloss-annotated dataset is both expensive and impractical to cover multiple languages, and pre-trained generative models cannot be efficiently used due to the lack of textual source context in SLT. To address these challenges, we propose a gloss-free framework for the WMT23 SLT task. Our system primarily consists of a visual extractor for extracting video embeddings and a generator responsible for producing the translated text. We also employ an embedding alignment block that is trained to align the embedding space of the visual extractor with that of the generator. Despite undergoing extensive training and validation, our system consistently falls short of meeting the baseline performance. Further analysis shows that our model{'}s poor projection rate prevents it from learning diverse visual embeddings. Our codes and model checkpoints are available at https://github.com/HKUST-KnowComp/SLT.", }
Sign Language Translation (SLT) is a complex task that involves accurately interpreting sign language gestures and translating them into spoken or written language and vice versa. Its primary objective is to facilitate communication between individuals with hearing difficulties using deep learning systems. Existing approaches leverage gloss annotations of sign language gestures to assist the model in capturing the movement and differentiating various gestures. However, constructing a large-scale gloss-annotated dataset is both expensive and impractical to cover multiple languages, and pre-trained generative models cannot be efficiently used due to the lack of textual source context in SLT. To address these challenges, we propose a gloss-free framework for the WMT23 SLT task. Our system primarily consists of a visual extractor for extracting video embeddings and a generator responsible for producing the translated text. We also employ an embedding alignment block that is trained to align the embedding space of the visual extractor with that of the generator. Despite undergoing extensive training and validation, our system consistently falls short of meeting the baseline performance. Further analysis shows that our model{'}s poor projection rate prevents it from learning diverse visual embeddings. Our codes and model checkpoints are available at https://github.com/HKUST-KnowComp/SLT.
[ "Xu, Baixuan", "Shi, Haochen", "Zheng, Tianshi", "Zong, Qing", "Wang, Weiqi", "Wang, Zhaowei", "Song, Yangqiu" ]
KnowComp Submission for WMT23 Sign Language Translation Task
wmt-1.36
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
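The KnowComp system trains an embedding alignment block to align the visual extractor's space with the generator's. One plausible instantiation is a linear projection trained with a cosine alignment loss over paired embeddings; the dimensions and loss choice below are assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

visual_dim, text_dim = 768, 512            # assumed sizes
align = nn.Linear(visual_dim, text_dim)    # the "alignment block"

visual_emb = torch.randn(32, visual_dim)   # from the video encoder
text_emb = torch.randn(32, text_dim)       # from the (frozen) generator

projected = align(visual_emb)
# Pull each projected visual embedding toward its paired text embedding.
loss = 1.0 - F.cosine_similarity(projected, text_emb, dim=-1).mean()
loss.backward()
```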
https://aclanthology.org/2023.wmt-1.37.bib
https://aclanthology.org/2023.wmt-1.37/
@inproceedings{minh-cong-etal-2023-fast, title = "A Fast Method to Filter Noisy Parallel Data {WMT}2023 Shared Task on Parallel Data Curation", author = "Minh-Cong, Nguyen-Hoang and Vinh, Nguyen Van and Le-Minh, Nguyen", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.37", doi = "10.18653/v1/2023.wmt-1.37", pages = "359--365", abstract = "The effectiveness of a machine translation (MT) system is intricately linked to the quality of its training dataset. In an era where websites offer an extensive repository of translations such as movie subtitles, stories, and TED Talks, the fundamental challenge resides in pinpointing the sentence pairs or documents that represent accurate translations of each other. This paper presents the results of our submission to the shared task WMT2023 (Sloto et al., 2023), which aimed to evaluate parallel data curation methods for improving the MT system. The task involved alignment and filtering data to create high-quality parallel corpora for training and evaluating the MT models. Our approach leveraged a combination of dictionary and rule-based methods to ensure data quality and consistency. We achieved an improvement with the highest 1.6 BLEU score compared to the baseline system. Significantly, our approach showed consistent improvements across all test sets, suggesting its efficiency.", }
The effectiveness of a machine translation (MT) system is intricately linked to the quality of its training dataset. In an era where websites offer an extensive repository of translations such as movie subtitles, stories, and TED Talks, the fundamental challenge resides in pinpointing the sentence pairs or documents that represent accurate translations of each other. This paper presents the results of our submission to the shared task WMT2023 (Sloto et al., 2023), which aimed to evaluate parallel data curation methods for improving the MT system. The task involved aligning and filtering data to create high-quality parallel corpora for training and evaluating the MT models. Our approach leveraged a combination of dictionary and rule-based methods to ensure data quality and consistency. We achieved an improvement of up to 1.6 BLEU over the baseline system. Notably, our approach showed consistent improvements across all test sets, suggesting its effectiveness.
[ "Minh-Cong, Nguyen-Hoang", "Vinh, Nguyen Van", "Le-Minh, Nguyen" ]
A Fast Method to Filter Noisy Parallel Data WMT2023 Shared Task on Parallel Data Curation
wmt-1.37
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
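The filtering approach above combines dictionary and rule-based methods. A minimal sketch of one such rule is a dictionary-coverage filter that keeps a sentence pair only if enough source words have a dictionary translation on the target side; the toy lexicon and the 0.5 threshold are illustrative.

```python
def dict_coverage(src: str, tgt: str, lexicon: dict) -> float:
    """Fraction of source words whose dictionary translation occurs in tgt."""
    src_words = src.lower().split()
    tgt_words = set(tgt.lower().split())
    if not src_words:
        return 0.0
    hits = sum(
        1 for w in src_words
        if any(t in tgt_words for t in lexicon.get(w, ()))
    )
    return hits / len(src_words)

lexicon = {"con": ["cat"], "meo": ["cat"], "nho": ["small", "little"]}
pair = ("con meo nho", "the small cat")
keep = dict_coverage(*pair, lexicon) >= 0.5    # tunable threshold
print(keep)   # True
```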
https://aclanthology.org/2023.wmt-1.38.bib
https://aclanthology.org/2023.wmt-1.38/
@inproceedings{steingrimsson-2023-sentence, title = "A Sentence Alignment Approach to Document Alignment and Multi-faceted Filtering for Curating Parallel Sentence Pairs from Web-crawled Data", author = "Steingrimsson, Steinthor", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.38", doi = "10.18653/v1/2023.wmt-1.38", pages = "366--374", abstract = "This paper describes the AST submission to the WMT23 Shared Task on Parallel Data Curation. We experiment with two approaches for curating data from the provided web-scraped texts. We use sentence alignment to identify document alignments in the data and extract parallel sentence pairs from the aligned documents. All other sentences, not aligned in that step, are paired based on cosine similarity before we apply various different filters. For filtering, we use language detection, fluency classification, word alignments, cosine distance as calculated by multilingual sentence embedding models, and Bicleaner AI. Our best model outperforms the baseline by 1.9 BLEU points on average over the four provided evaluation sets.", }
This paper describes the AST submission to the WMT23 Shared Task on Parallel Data Curation. We experiment with two approaches for curating data from the provided web-scraped texts. We use sentence alignment to identify document alignments in the data and extract parallel sentence pairs from the aligned documents. All other sentences, not aligned in that step, are paired based on cosine similarity before we apply various filters. For filtering, we use language detection, fluency classification, word alignments, cosine distance as calculated by multilingual sentence embedding models, and Bicleaner AI. Our best model outperforms the baseline by 1.9 BLEU points on average over the four provided evaluation sets.
[ "Steingrimsson, Steinthor" ]
A Sentence Alignment Approach to Document Alignment and Multi-faceted Filtering for Curating Parallel Sentence Pairs from Web-crawled Data
wmt-1.38
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
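The AST system pairs leftover sentences by cosine similarity of multilingual sentence embeddings. A sketch with `sentence-transformers` and LaBSE, one common embedding model for this purpose (the abstract does not commit to this exact model); the 0.8 acceptance threshold is an assumption.

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/LaBSE")

src = ["The weather is nice today.", "He bought a new car."]
tgt = ["Das Wetter ist heute schön.", "Sie liest ein Buch."]

# With L2-normalized embeddings, dot product == cosine similarity.
e_src = model.encode(src, normalize_embeddings=True)
e_tgt = model.encode(tgt, normalize_embeddings=True)
sim = e_src @ e_tgt.T                      # (len(src), len(tgt))

for i, j in enumerate(sim.argmax(axis=1)):
    if sim[i, j] > 0.8:                    # tunable acceptance threshold
        print(f"pair: {src[i]!r} <-> {tgt[j]!r} ({sim[i, j]:.2f})")
```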
https://aclanthology.org/2023.wmt-1.39.bib
https://aclanthology.org/2023.wmt-1.39/
@inproceedings{petrick-etal-2023-document, title = "Document-Level Language Models for Machine Translation", author = "Petrick, Frithjof and Herold, Christian and Petrushkov, Pavel and Khadivi, Shahram and Ney, Hermann", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.39", doi = "10.18653/v1/2023.wmt-1.39", pages = "375--391", abstract = "Despite the known limitations, most machine translation systems today still operate on the sentence-level. One reason for this is, that most parallel training data is only sentence-level aligned, without document-level meta information available. In this work, we set out to build context-aware translation systems utilizing document-level monolingual data instead. This can be achieved by combining any existing sentence-level translation model with a document-level language model. We improve existing approaches by leveraging recent advancements in model combination. Additionally, we propose novel weighting techniques that make the system combination more flexible and significantly reduce computational overhead. In a comprehensive evaluation on four diverse translation tasks, we show that our extensions improve document-targeted scores significantly and are also computationally more efficient. However, we also find that in most scenarios, back-translation gives even better results, at the cost of having to re-train the translation system. Finally, we explore language model fusion in the light of recent advancements in large language models. Our findings suggest that there might be strong potential in utilizing large language models via model combination.", }
Despite the known limitations, most machine translation systems today still operate at the sentence level. One reason for this is that most parallel training data is only sentence-level aligned, without document-level meta information available. In this work, we set out to build context-aware translation systems utilizing document-level monolingual data instead. This can be achieved by combining any existing sentence-level translation model with a document-level language model. We improve existing approaches by leveraging recent advancements in model combination. Additionally, we propose novel weighting techniques that make the system combination more flexible and significantly reduce computational overhead. In a comprehensive evaluation on four diverse translation tasks, we show that our extensions improve document-targeted scores significantly and are also computationally more efficient. However, we also find that in most scenarios, back-translation gives even better results, at the cost of having to re-train the translation system. Finally, we explore language model fusion in the light of recent advancements in large language models. Our findings suggest that there might be strong potential in utilizing large language models via model combination.
[ "Petrick, Frithjof", "Herold, Christian", "Petrushkov, Pavel", "Khadivi, Shahram", "Ney, Hermann" ]
Document-Level Language Models for Machine Translation
wmt-1.39
2310.12303
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
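The simplest member of the model-combination family this paper studies is log-linear (shallow) fusion: rescore each hypothesis with the MT log-probability plus a weighted document-LM log-probability. A schematic reranking sketch follows; the weight and scores are made up, and the paper's proposed weighting techniques are more elaborate than this fixed interpolation.

```python
def fused_score(mt_logprob: float, doclm_logprob: float, lam: float = 0.3) -> float:
    """Log-linear combination used to rerank n-best translations."""
    return mt_logprob + lam * doclm_logprob

# Rerank an n-best list for one sentence given its document context.
nbest = [
    ("She put it on the table.", -4.1, -7.9),    # (hyp, MT score, doc-LM score)
    ("She put him on the table.", -4.0, -12.3),
]
best = max(nbest, key=lambda h: fused_score(h[1], h[2]))
print(best[0])   # the doc-LM penalizes the contextually wrong pronoun
```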
https://aclanthology.org/2023.wmt-1.40.bib
https://aclanthology.org/2023.wmt-1.40/
@inproceedings{robinson-etal-2023-chatgpt, title = "{C}hat{GPT} {MT}: Competitive for High- (but Not Low-) Resource Languages", author = "Robinson, Nathaniel and Ogayo, Perez and Mortensen, David R. and Neubig, Graham", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.40", doi = "10.18653/v1/2023.wmt-1.40", pages = "392--418", abstract = "Large language models (LLMs) implicitly learn to perform a range of language tasks, including machine translation (MT). Previous studies explore aspects of LLMs{'} MT capabilities. However, there exist a wide variety of languages for which recent LLM MT performance has never before been evaluated. Without published experimental evidence on the matter, it is difficult for speakers of the world{'}s diverse languages to know how and whether they can use LLMs for their languages. We present the first experimental evidence for an expansive set of 204 languages, along with MT cost analysis, using the FLORES-200 benchmark. Trends reveal that GPT models approach or exceed traditional MT model performance for some high-resource languages (HRLs) but consistently lag for low-resource languages (LRLs), under-performing traditional MT for 84.1{\%} of languages we covered. Our analysis reveals that a language{'}s resource level is the most important feature in determining ChatGPT{'}s relative ability to translate it, and suggests that ChatGPT is especially disadvantaged for LRLs and African languages.", }
Large language models (LLMs) implicitly learn to perform a range of language tasks, including machine translation (MT). Previous studies explore aspects of LLMs{'} MT capabilities. However, there exists a wide variety of languages for which recent LLM MT performance has never before been evaluated. Without published experimental evidence on the matter, it is difficult for speakers of the world{'}s diverse languages to know how and whether they can use LLMs for their languages. We present the first experimental evidence for an expansive set of 204 languages, along with MT cost analysis, using the FLORES-200 benchmark. Trends reveal that GPT models approach or exceed traditional MT model performance for some high-resource languages (HRLs) but consistently lag for low-resource languages (LRLs), under-performing traditional MT for 84.1{\%} of the languages we covered. Our analysis reveals that a language{'}s resource level is the most important feature in determining ChatGPT{'}s relative ability to translate it, and suggests that ChatGPT is especially disadvantaged for LRLs and African languages.
[ "Robinson, Nathaniel", "Ogayo, Perez", "Mortensen, David R.", "Neubig, Graham" ]
ChatGPT MT: Competitive for High- (but Not Low-) Resource Languages
wmt-1.40
2309.07423
[ "https://github.com/cmu-llab/gpt_mt_benchmark" ]
https://huggingface.co/papers/2309.07423
0
0
0
4
[]
[]
[]
1
Poster
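The study above prompts GPT models to translate FLORES-200 sentences and scores the outputs. A minimal sketch with the `openai` v1 Python client and chrF++; the prompt wording and the `gpt-3.5-turbo` model name are placeholders, not the paper's exact setup.

```python
import sacrebleu
from openai import OpenAI

client = OpenAI()   # reads OPENAI_API_KEY from the environment

def translate(text: str, tgt_lang: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Translate to {tgt_lang}. Output only the translation.\n{text}",
        }],
    )
    return resp.choices[0].message.content.strip()

sources = ["The cat sat on the mat."]
references = [["Le chat était assis sur le tapis."]]
hyps = [translate(s, "French") for s in sources]
print(sacrebleu.CHRF(word_order=2).corpus_score(hyps, references))
```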
https://aclanthology.org/2023.wmt-1.41.bib
https://aclanthology.org/2023.wmt-1.41/
@inproceedings{karpinska-iyyer-2023-large, title = "Large Language Models Effectively Leverage Document-level Context for Literary Translation, but Critical Errors Persist", author = "Karpinska, Marzena and Iyyer, Mohit", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.41", doi = "10.18653/v1/2023.wmt-1.41", pages = "419--451", abstract = "Large language models (LLMs) are competitive with the state of the art on a wide range of sentence-level translation datasets. However, their ability to translate paragraphs and documents remains unexplored because evaluation in these settings is costly and difficult. We show through a rigorous human evaluation that asking the GPT-3.5 (text-davinci-003) LLM to translate an entire literary paragraph (e.g., from a novel) at once results in higher-quality translations than standard sentence-by-sentence translation across 18 linguistically-diverse language pairs (e.g., translating into and out of Japanese, Polish, and English). Our evaluation, which took approximately 350 hours of effort for annotation and analysis, is conducted by hiring translators fluent in both the source and target language and asking them to provide both span-level error annotations as well as preference judgments of which system{'}s translations are better. We observe that discourse-level LLM translators commit fewer mistranslations, grammar errors, and stylistic inconsistencies than sentence-level approaches. With that said, critical errors still abound, including occasional content omissions, and a human translator{'}s intervention remains necessary to ensure that the author{'}s voice remains intact. We publicly release our dataset and error annotations to spur future research on the evaluation of document-level literary translation.", }
Large language models (LLMs) are competitive with the state of the art on a wide range of sentence-level translation datasets. However, their ability to translate paragraphs and documents remains unexplored because evaluation in these settings is costly and difficult. We show through a rigorous human evaluation that asking the GPT-3.5 (text-davinci-003) LLM to translate an entire literary paragraph (e.g., from a novel) at once results in higher-quality translations than standard sentence-by-sentence translation across 18 linguistically-diverse language pairs (e.g., translating into and out of Japanese, Polish, and English). Our evaluation, which took approximately 350 hours of effort for annotation and analysis, is conducted by hiring translators fluent in both the source and target language and asking them to provide both span-level error annotations as well as preference judgments of which system{'}s translations are better. We observe that discourse-level LLM translators commit fewer mistranslations, grammar errors, and stylistic inconsistencies than sentence-level approaches. With that said, critical errors still abound, including occasional content omissions, and a human translator{'}s intervention remains necessary to ensure that the author{'}s voice remains intact. We publicly release our dataset and error annotations to spur future research on the evaluation of document-level literary translation.
[ "Karpinska, Marzena", "Iyyer, Mohit" ]
Large Language Models Effectively Leverage Document-level Context for Literary Translation, but Critical Errors Persist
wmt-1.41
2304.03245
[ "https://github.com/marzenakrp/literarytranslation" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.42.bib
https://aclanthology.org/2023.wmt-1.42/
@inproceedings{wicks-post-2023-identifying, title = "Identifying Context-Dependent Translations for Evaluation Set Production", author = "Wicks, Rachel and Post, Matt", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.42", doi = "10.18653/v1/2023.wmt-1.42", pages = "452--467", abstract = "A major impediment to the transition to contextual machine translation is the absence of good evaluation metrics and test sets. Sentences that require context to be translated correctly are rare in test sets, reducing the utility of standard corpus-level metrics such as COMET or BLEU. On the other hand, datasets that annotate such sentences are also rare, small in scale, and available for only a few languages. To address this, we modernize, generalize, and extend previous annotation pipelines to produce MultiPro, a tool that identifies subsets of parallel documents containing sentences that require context to correctly translate five phenomena: gender, formality, and animacy for pronouns, verb phrase ellipsis, and ambiguous noun inflections. The input to the pipeline is a set of hand-crafted, per-language, linguistically-informed rules that select contextual sentence pairs using coreference, part-of-speech, and morphological features provided by state-of-the-art tools. We apply this pipeline to seven languages pairs (EN into and out-of DE, ES, FR, IT, PL, PT, and RU) and two datasets (OpenSubtitles and WMT test sets), and validate its performance using both overlap with previous work and its ability to discriminate a contextual MT system from a sentence-based one. We release the MultiPro pipeline and data as open source.", }
A major impediment to the transition to contextual machine translation is the absence of good evaluation metrics and test sets. Sentences that require context to be translated correctly are rare in test sets, reducing the utility of standard corpus-level metrics such as COMET or BLEU. On the other hand, datasets that annotate such sentences are also rare, small in scale, and available for only a few languages. To address this, we modernize, generalize, and extend previous annotation pipelines to produce MultiPro, a tool that identifies subsets of parallel documents containing sentences that require context to correctly translate five phenomena: gender, formality, and animacy for pronouns, verb phrase ellipsis, and ambiguous noun inflections. The input to the pipeline is a set of hand-crafted, per-language, linguistically-informed rules that select contextual sentence pairs using coreference, part-of-speech, and morphological features provided by state-of-the-art tools. We apply this pipeline to seven language pairs (EN into and out of DE, ES, FR, IT, PL, PT, and RU) and two datasets (OpenSubtitles and WMT test sets), and validate its performance using both overlap with previous work and its ability to discriminate a contextual MT system from a sentence-based one. We release the MultiPro pipeline and data as open source.
[ "Wicks, Rachel", "Post, Matt" ]
Identifying Context-Dependent Translations for Evaluation Set Production
wmt-1.42
2311.02321
[ "https://github.com/rewicks/ctxpro" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
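MultiPro selects sentence pairs via hand-crafted linguistic rules, e.g., pronouns whose antecedent (and hence target-language gender) lies outside the sentence. A toy spaCy rule in that spirit, far cruder than the paper's coreference-based pipeline:

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # python -m spacy download en_core_web_sm

def needs_context(sentence: str) -> bool:
    """True if 'it' appears with no candidate noun antecedent in-sentence."""
    doc = nlp(sentence)
    for tok in doc:
        if tok.lower_ == "it" and tok.pos_ == "PRON":
            has_noun_before = any(
                t.pos_ in ("NOUN", "PROPN") and t.i < tok.i for t in doc
            )
            if not has_noun_before:
                return True
    return False

print(needs_context("It was painted red."))               # True  -> keep
print(needs_context("The door creaked when it opened."))  # False -> skip
```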
https://aclanthology.org/2023.wmt-1.43.bib
https://aclanthology.org/2023.wmt-1.43/
@inproceedings{zhang-etal-2023-machine, title = "Machine Translation with Large Language Models: Prompting, Few-shot Learning, and Fine-tuning with {QL}o{RA}", author = "Zhang, Xuan and Rajabi, Navid and Duh, Kevin and Koehn, Philipp", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.43", doi = "10.18653/v1/2023.wmt-1.43", pages = "468--481", abstract = "While large language models have made remarkable advancements in natural language generation, their potential in machine translation, especially when fine-tuned, remains under-explored. In our study, we conduct comprehensive experiments, evaluating 15 publicly available language models on machine translation tasks. We compare the performance across three methodologies: zero-shot prompting, few-shot learning, and fine-tuning. Central to our approach is the use of QLoRA, an efficient fine-tuning method. On French-English, QLoRA fine-tuning outperforms both few-shot learning and models trained from scratch. This superiority is highlighted in both sentence-level and document-level translations, with a significant BLEU score improvement of 28.93 over the prompting method. Impressively, with QLoRA, the enhanced performance is achieved by fine-tuning a mere 0.77{\%} of the model{'}s parameters.", }
While large language models have made remarkable advancements in natural language generation, their potential in machine translation, especially when fine-tuned, remains under-explored. In our study, we conduct comprehensive experiments, evaluating 15 publicly available language models on machine translation tasks. We compare the performance across three methodologies: zero-shot prompting, few-shot learning, and fine-tuning. Central to our approach is the use of QLoRA, an efficient fine-tuning method. On French-English, QLoRA fine-tuning outperforms both few-shot learning and models trained from scratch. This superiority is highlighted in both sentence-level and document-level translations, with a significant BLEU score improvement of 28.93 over the prompting method. Impressively, with QLoRA, the enhanced performance is achieved by fine-tuning a mere 0.77{\%} of the model{'}s parameters.
[ "Zhang, Xuan", "Rajabi, Navid", "Duh, Kevin", "Koehn, Philipp" ]
Machine Translation with Large Language Models: Prompting, Few-shot Learning, and Fine-tuning with QLoRA
wmt-1.43
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
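A minimal sketch of the QLoRA recipe described above with `transformers`, `bitsandbytes`, and `peft`: load the base model in 4-bit NF4 and attach LoRA adapters. The base checkpoint, target modules, and LoRA hyperparameters are illustrative, not the paper's configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "bigscience/bloom-560m", quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["query_key_value"],   # BLOOM's fused attention projection
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()   # typically well under 1% of weights
```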
https://aclanthology.org/2023.wmt-1.44.bib
https://aclanthology.org/2023.wmt-1.44/
@inproceedings{iyer-etal-2023-towards, title = "Towards Effective Disambiguation for Machine Translation with Large Language Models", author = "Iyer, Vivek and Chen, Pinzhen and Birch, Alexandra", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.44", doi = "10.18653/v1/2023.wmt-1.44", pages = "482--495", abstract = "Resolving semantic ambiguity has long been recognised as a central challenge in the field of Machine Translation. Recent work on benchmarking translation performance on ambiguous sentences has exposed the limitations of conventional Neural Machine Translation (NMT) systems, which fail to handle many such cases. Large language models (LLMs) have emerged as a promising alternative, demonstrating comparable performance to traditional NMT models while introducing new paradigms for controlling the target outputs. In this paper, we study the capabilities of LLMs to translate {``}ambiguous sentences{''} - i.e. those containing highly polysemous words and/or rare word senses. We also propose two ways to improve their disambiguation capabilities, through a) in-context learning and b) fine-tuning on carefully curated ambiguous datasets. Experiments show that our methods can match or outperform state-of-the-art systems such as DeepL and NLLB in four out of five language directions. Our research provides valuable insights into effectively adapting LLMs to become better disambiguators during Machine Translation. We release our curated disambiguation corpora and resources at https://data.statmt.org/ambiguous-europarl.", }
Resolving semantic ambiguity has long been recognised as a central challenge in the field of Machine Translation. Recent work on benchmarking translation performance on ambiguous sentences has exposed the limitations of conventional Neural Machine Translation (NMT) systems, which fail to handle many such cases. Large language models (LLMs) have emerged as a promising alternative, demonstrating comparable performance to traditional NMT models while introducing new paradigms for controlling the target outputs. In this paper, we study the capabilities of LLMs to translate {``}ambiguous sentences{''} - i.e. those containing highly polysemous words and/or rare word senses. We also propose two ways to improve their disambiguation capabilities, through a) in-context learning and b) fine-tuning on carefully curated ambiguous datasets. Experiments show that our methods can match or outperform state-of-the-art systems such as DeepL and NLLB in four out of five language directions. Our research provides valuable insights into effectively adapting LLMs to become better disambiguators during Machine Translation. We release our curated disambiguation corpora and resources at https://data.statmt.org/ambiguous-europarl.
[ "Iyer, Vivek", "Chen, Pinzhen", "Birch, Alex", "ra" ]
Towards Effective Disambiguation for Machine Translation with Large Language Models
wmt-1.44
2309.11668
[ "" ]
https://huggingface.co/papers/2309.11668
1
1
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.wmt-1.45.bib
https://aclanthology.org/2023.wmt-1.45/
@inproceedings{zhang-etal-2023-closer, title = "A Closer Look at Transformer Attention for Multilingual Translation", author = "Zhang, Jingyi and de Melo, Gerard and Xu, Hongfei and Chen, Kehai", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.45", doi = "10.18653/v1/2023.wmt-1.45", pages = "496--506", abstract = "Transformers are the predominant model for machine translation. Recent works also showed that a single Transformer model can be trained to learn translation for multiple different language pairs, achieving promising results. In this work, we investigate how the multilingual Transformer model pays attention for translating different language pairs. We first performed automatic pruning to eliminate a large number of noisy heads and then analyzed the functions and behaviors of the remaining heads in both self-attention and cross-attention. We find that different language pairs, in spite of having different syntax and word orders, tended to share the same heads for the same functions, such as syntax heads and reordering heads. However, the different characteristics of different language pairs clearly caused interference in function heads and affected head accuracies. Additionally, we reveal an interesting behavior of the Transformer cross-attention: the deep-layer cross-attention heads work in a clear cooperative way to learn different options for word reordering, which can be caused by the nature of translation tasks having multiple different gold translations in the target language for the same source sentence.", }
Transformers are the predominant model for machine translation. Recent works have also shown that a single Transformer model can be trained to learn translation for multiple different language pairs, achieving promising results. In this work, we investigate how the multilingual Transformer model distributes attention when translating different language pairs. We first perform automatic pruning to eliminate a large number of noisy heads and then analyze the functions and behaviors of the remaining heads in both self-attention and cross-attention. We find that different language pairs, in spite of having different syntax and word orders, tend to share the same heads for the same functions, such as syntax heads and reordering heads. However, the differing characteristics of language pairs clearly cause interference in function heads and affect head accuracies. Additionally, we reveal an interesting behavior of the Transformer cross-attention: the deep-layer cross-attention heads work in a clearly cooperative way to learn different options for word reordering, which may be explained by the nature of translation tasks, where the same source sentence admits multiple different gold translations in the target language.
[ "Zhang, Jingyi", "de Melo, Gerard", "Xu, Hongfei", "Chen, Kehai" ]
A Closer Look at Transformer Attention for Multilingual Translation
wmt-1.45
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
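The analysis above begins by pruning noisy attention heads. HuggingFace seq2seq models accept a `head_mask` argument, which makes a quick single-head ablation easy to sketch; the Marian checkpoint and the choice of head are arbitrary, and this toy loss comparison stands in for the paper's automatic pruning procedure.

```python
import torch
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-de-en"
tok = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name).eval()

batch = tok(["Das ist ein Test."], return_tensors="pt")
labels = tok(text_target=["This is a test."], return_tensors="pt").input_ids

def loss_with_mask(head_mask):
    """Teacher-forced loss with selected encoder heads masked out."""
    with torch.no_grad():
        return model(**batch, labels=labels, head_mask=head_mask).loss.item()

cfg = model.config
full = torch.ones(cfg.encoder_layers, cfg.encoder_attention_heads)
ablated = full.clone()
ablated[0, 0] = 0.0                      # zero out layer-0 head-0

print("loss, all heads:      ", loss_with_mask(full))
print("loss, head (0,0) off: ", loss_with_mask(ablated))
```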
https://aclanthology.org/2023.wmt-1.46.bib
https://aclanthology.org/2023.wmt-1.46/
@inproceedings{schmidt-di-gangi-2023-bridging, title = "Bridging the Gap between Position-Based and Content-Based Self-Attention for Neural Machine Translation", author = "Schmidt, Felix and Di Gangi, Mattia", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.46", doi = "10.18653/v1/2023.wmt-1.46", pages = "507--521", abstract = "Position-based token-mixing approaches, such as FNet and MLPMixer, have shown to be exciting attention alternatives for computer vision and natural language understanding. The motivation is usually to remove redundant operations for higher efficiency on consumer GPUs while maintaining Transformer quality. On the hardware side, research on memristive crossbar arrays shows the possibility of efficiency gains up to two orders of magnitude by performing in-memory computation with weights stored on device. While it is impossible to store dynamic attention weights based on token-token interactions on device, position-based weights represent a concrete alternative if they only lead to minimal degradation. In this paper, we propose position-based attention as a variant of multi-head attention where the attention weights are computed from position representations. A naive replacement of token vectors with position vectors in self-attention results in a significant loss in translation quality, which can be recovered by using relative position representations and a gating mechanism. We show analytically that this gating mechanism introduces some form of word dependency and validate its effectiveness experimentally under various conditions. The resulting network, rPosNet, outperforms previous position-based approaches and matches the quality of the Transformer with relative position embedding while requiring 20{\%} less attention parameters after training.", }
Position-based token-mixing approaches, such as FNet and MLPMixer, have been shown to be exciting attention alternatives for computer vision and natural language understanding. The motivation is usually to remove redundant operations for higher efficiency on consumer GPUs while maintaining Transformer quality. On the hardware side, research on memristive crossbar arrays shows the possibility of efficiency gains of up to two orders of magnitude by performing in-memory computation with weights stored on device. While it is impossible to store dynamic attention weights based on token-token interactions on device, position-based weights represent a concrete alternative if they only lead to minimal degradation. In this paper, we propose position-based attention as a variant of multi-head attention where the attention weights are computed from position representations. A naive replacement of token vectors with position vectors in self-attention results in a significant loss in translation quality, which can be recovered by using relative position representations and a gating mechanism. We show analytically that this gating mechanism introduces some form of word dependency and validate its effectiveness experimentally under various conditions. The resulting network, rPosNet, outperforms previous position-based approaches and matches the quality of the Transformer with relative position embedding while requiring 20{\%} fewer attention parameters after training.
[ "Schmidt, Felix", "Di Gangi, Mattia" ]
Bridging the Gap between Position-Based and Content-Based Self-Attention for Neural Machine Translation
wmt-1.46
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
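A stripped-down sketch of the idea above: attention logits computed purely from relative-position embeddings (no query-key product), combined with a token-dependent gate that reintroduces word dependency. This single-head toy omits rPosNet's multi-head structure and exact gating design.

```python
import torch
import torch.nn as nn

class RelPosAttention(nn.Module):
    """Single-head attention whose logits depend only on relative position."""
    def __init__(self, d_model=64, max_rel=32):
        super().__init__()
        self.max_rel = max_rel
        self.rel_logit = nn.Embedding(2 * max_rel + 1, 1)  # one logit per offset
        self.value = nn.Linear(d_model, d_model)
        self.gate = nn.Linear(d_model, d_model)            # token-dependent gate

    def forward(self, x):                                  # x: (B, T, d)
        T = x.size(1)
        idx = torch.arange(T)
        rel = (idx[None, :] - idx[:, None]).clamp(-self.max_rel, self.max_rel)
        logits = self.rel_logit(rel + self.max_rel).squeeze(-1)   # (T, T)
        attn = logits.softmax(-1)               # content-free attention weights
        mixed = attn @ self.value(x)            # (B, T, d)
        return torch.sigmoid(self.gate(x)) * mixed   # gating adds word dependency

x = torch.randn(2, 10, 64)
print(RelPosAttention()(x).shape)   # torch.Size([2, 10, 64])
```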
https://aclanthology.org/2023.wmt-1.47.bib
https://aclanthology.org/2023.wmt-1.47/
@inproceedings{hirasawa-etal-2023-visual, title = "Visual Prediction Improves Zero-Shot Cross-Modal Machine Translation", author = "Hirasawa, Tosho and Bugliarello, Emanuele and Elliott, Desmond and Komachi, Mamoru", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.47", doi = "10.18653/v1/2023.wmt-1.47", pages = "522--535", abstract = "Multimodal machine translation (MMT) systems have been successfully developed in recent years for a few language pairs. However, training such models usually requires tuples of a source language text, target language text, and images. Obtaining these data involves expensive human annotations, making it difficult to develop models for unseen text-only language pairs. In this work, we propose the task of zero-shot cross-modal machine translation aiming to transfer multimodal knowledge from an existing multimodal parallel corpus into a new translation direction. We also introduce a novel MMT model with a visual prediction network to learn visual features grounded on multimodal parallel data and provide pseudo-features for text-only language pairs. With this training paradigm, our MMT model outperforms its text-only counterpart. In our extensive analyses, we show that (i) the selection of visual features is important, and (ii) training on image-aware translations and being grounded on a similar language pair are mandatory.", }
Multimodal machine translation (MMT) systems have been successfully developed in recent years for a few language pairs. However, training such models usually requires tuples of a source language text, target language text, and images. Obtaining these data involves expensive human annotations, making it difficult to develop models for unseen text-only language pairs. In this work, we propose the task of zero-shot cross-modal machine translation aiming to transfer multimodal knowledge from an existing multimodal parallel corpus into a new translation direction. We also introduce a novel MMT model with a visual prediction network to learn visual features grounded on multimodal parallel data and provide pseudo-features for text-only language pairs. With this training paradigm, our MMT model outperforms its text-only counterpart. In our extensive analyses, we show that (i) the selection of visual features is important, and (ii) training on image-aware translations and being grounded on a similar language pair are mandatory.
[ "Hirasawa, Tosho", "Bugliarello, Emanuele", "Elliott, Desmond", "Komachi, Mamoru" ]
Visual Prediction Improves Zero-Shot Cross-Modal Machine Translation
wmt-1.47
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.48.bib
https://aclanthology.org/2023.wmt-1.48/
@inproceedings{muller-etal-2023-gender, title = "The Gender-{GAP} Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages", author = "Muller, Benjamin and Alastruey, Belen and Hansanti, Prangthip and Kalbassi, Elahe and Ropers, Christophe and Smith, Eric and Williams, Adina and Zettlemoyer, Luke and Andrews, Pierre and Costa-juss{\`a}, Marta R.", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.48", doi = "10.18653/v1/2023.wmt-1.48", pages = "536--550", abstract = "Gender biases in language generation systems are challenging to mitigate. One possible source for these biases is gender representation disparities in the training and evaluation data. Despite recent progress in documenting this problem and many attempts at mitigating it, we still lack shared methodology and tooling to report gender representation in large datasets. Such quantitative reporting will enable further mitigation, e.g., via data augmentation. This paper describes the Gender-Gap Pipeline (for Gender-Aware Polyglot Pipeline), an automatic pipeline to characterize gender representation in large-scale datasets for 55 languages. The pipeline uses a multilingual lexicon of gendered person-nouns to quantify the gender representation in text. We showcase it to report gender representation in WMT training data and development data for the News task, confirming that current data is skewed towards masculine representation. Having unbalanced datasets may indirectly optimize our systems towards outperforming one gender over the others. We suggest introducing our gender quantification pipeline in current datasets and, ideally, modifying them toward a balanced representation.", }
Gender biases in language generation systems are challenging to mitigate. One possible source for these biases is gender representation disparities in the training and evaluation data. Despite recent progress in documenting this problem and many attempts at mitigating it, we still lack shared methodology and tooling to report gender representation in large datasets. Such quantitative reporting will enable further mitigation, e.g., via data augmentation. This paper describes the Gender-Gap Pipeline (for Gender-Aware Polyglot Pipeline), an automatic pipeline to characterize gender representation in large-scale datasets for 55 languages. The pipeline uses a multilingual lexicon of gendered person-nouns to quantify the gender representation in text. We showcase it to report gender representation in WMT training data and development data for the News task, confirming that current data is skewed towards masculine representation. Having unbalanced datasets may indirectly optimize our systems towards outperforming one gender over the others. We suggest introducing our gender quantification pipeline in current datasets and, ideally, modifying them toward a balanced representation.
[ "Muller, Benjamin", "Alastruey, Belen", "Hansanti, Prangthip", "Kalbassi, Elahe", "Ropers, Christophe", "Smith, Eric", "Williams, Adina", "Zettlemoyer, Luke", "Andrews, Pierre", "Costa-juss{\\`a}, Marta R." ]
The Gender-GAP Pipeline: A Gender-Aware Polyglot Pipeline for Gender Characterisation in 55 Languages
wmt-1.48
2308.16871
[ "https://github.com/facebookresearch/responsiblenlp" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
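The Gender-GAP pipeline quantifies gender representation by matching text against a lexicon of gendered person-nouns. A toy English-only version (the released pipeline covers 55 languages with a far larger, curated lexicon):

```python
from collections import Counter
import re

MASCULINE = {"he", "him", "his", "man", "men", "father", "son", "brother"}
FEMININE = {"she", "her", "hers", "woman", "women", "mother", "daughter", "sister"}

def gender_counts(lines):
    """Count lexicon matches per gender over a corpus of text lines."""
    counts = Counter()
    for line in lines:
        for tok in re.findall(r"[a-z']+", line.lower()):
            if tok in MASCULINE:
                counts["masculine"] += 1
            elif tok in FEMININE:
                counts["feminine"] += 1
    return counts

corpus = ["He told his brother.", "The woman met her mother.", "He left."]
print(gender_counts(corpus))   # Counter({'masculine': 4, 'feminine': 3})
```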
https://aclanthology.org/2023.wmt-1.49.bib
https://aclanthology.org/2023.wmt-1.49/
@inproceedings{marrese-taylor-etal-2023-towards, title = "Towards Better Evaluation for Formality-Controlled {E}nglish-{J}apanese Machine Translation", author = "Marrese-Taylor, Edison and Wang, Pin Chen and Matsuo, Yutaka", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.49", doi = "10.18653/v1/2023.wmt-1.49", pages = "551--560", abstract = "In this paper we propose a novel approach to automatically classify the level of formality in Japanese text, using three categories (formal, polite, and informal). We introduce a new dataset that combine manually-annotated sentences from existing resources, and formal sentences scrapped from the website of the House of Representatives and the House of Councilors of Japan. Based on our data, we propose a Transformer-based classification model for Japanese, which obtains state-of-the-art results in benchmark datasets. We further propose to utilize our classifier to study the effectiveness of prompting techniques for controlling the formality level of machine translation (MT) using Large Language Models (LLM). Our experimental setting includes a large selection of such models and is based on an En-{\textgreater}Ja parallel corpus specifically designed to test formality control in MT. Our results validate the robustness and effectiveness of our proposed approach and while also providing empirical evidence suggesting that prompting LLMs is a viable approach to control the formality level of En-{\textgreater}Ja MT using LLMs.", }
In this paper we propose a novel approach to automatically classify the level of formality in Japanese text, using three categories (formal, polite, and informal). We introduce a new dataset that combines manually-annotated sentences from existing resources and formal sentences scraped from the websites of the House of Representatives and the House of Councilors of Japan. Based on our data, we propose a Transformer-based classification model for Japanese, which obtains state-of-the-art results on benchmark datasets. We further propose to utilize our classifier to study the effectiveness of prompting techniques for controlling the formality level of machine translation (MT) using Large Language Models (LLMs). Our experimental setting includes a large selection of such models and is based on an En-{\textgreater}Ja parallel corpus specifically designed to test formality control in MT. Our results validate the robustness and effectiveness of our proposed approach while also providing empirical evidence suggesting that prompting LLMs is a viable approach to control the formality level of En-{\textgreater}Ja MT.
[ "Marrese-Taylor, Edison", "Wang, Pin Chen", "Matsuo, Yutaka" ]
Towards Better Evaluation for Formality-Controlled English-Japanese Machine Translation
wmt-1.49
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.50.bib
https://aclanthology.org/2023.wmt-1.50/
@inproceedings{peter-etal-2023-theres, title = "There{'}s No Data like Better Data: Using {QE} Metrics for {MT} Data Filtering", author = "Peter, Jan-Thorsten and Vilar, David and Deutsch, Daniel and Finkelstein, Mara and Juraska, Juraj and Freitag, Markus", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.50", doi = "10.18653/v1/2023.wmt-1.50", pages = "561--577", abstract = "Quality Estimation (QE), the evaluation of machine translation output without the need of explicit references, has seen big improvements in the last years with the use of neural metrics. In this paper we analyze the viability of using QE metrics for filtering out bad quality sentence pairs in the training data of neural machine translation systems (NMT). While most corpus filtering methods are focused on detecting noisy examples in collections of texts, usually huge amounts of web crawled data, QE models are trained to discriminate more fine-grained quality differences. We show that by selecting the highest quality sentence pairs in the training data, we can improve translation quality while reducing the training size by half. We also provide a detailed analysis of the filtering results, which highlights the differences between both approaches.", }
Quality Estimation (QE), the evaluation of machine translation output without the need for explicit references, has seen substantial improvements in recent years with the use of neural metrics. In this paper we analyze the viability of using QE metrics for filtering out low-quality sentence pairs from the training data of neural machine translation (NMT) systems. While most corpus filtering methods are focused on detecting noisy examples in collections of texts, usually huge amounts of web-crawled data, QE models are trained to discriminate more fine-grained quality differences. We show that by selecting the highest-quality sentence pairs in the training data, we can improve translation quality while reducing the training size by half. We also provide a detailed analysis of the filtering results, which highlights the differences between both approaches.
[ "Peter, Jan-Thorsten", "Vilar, David", "Deutsch, Daniel", "Finkelstein, Mara", "Juraska, Juraj", "Freitag, Markus" ]
There's No Data like Better Data: Using QE Metrics for MT Data Filtering
wmt-1.50
2311.05350
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.51.bib
https://aclanthology.org/2023.wmt-1.51/
@inproceedings{freitag-etal-2023-results, title = "Results of {WMT}23 Metrics Shared Task: Metrics Might Be Guilty but References Are Not Innocent", author = "Freitag, Markus and Mathur, Nitika and Lo, Chi-kiu and Avramidis, Eleftherios and Rei, Ricardo and Thompson, Brian and Kocmi, Tom and Blain, Frederic and Deutsch, Daniel and Stewart, Craig and Zerva, Chrysoula and Castilho, Sheila and Lavie, Alon and Foster, George", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.51", doi = "10.18653/v1/2023.wmt-1.51", pages = "578--628", abstract = "This paper presents the results of the WMT23 Metrics Shared Task. Participants submitting automatic MT evaluation metrics were asked to score the outputs of the translation systems competing in the WMT23 News Translation Task. All metrics were evaluated on how well they correlate with human ratings at the system and segment level. Similar to last year, we acquired our own human ratings based on expert-based human evaluation via Multidimensional Quality Metrics (MQM). Following last year{'}s success, we also included a challenge set subtask, where participants had to create contrastive test suites for evaluating metrics{'} ability to capture and penalise specific types of translation errors. Furthermore, we improved our meta-evaluation procedure by considering fewer tasks and calculating a global score by weighted averaging across the various tasks. We present an extensive analysis on how well metrics perform on three language pairs: Chinese-English, Hebrew-English on the sentence-level and English-German on the paragraph-level. The results strongly confirm the results reported last year, that neural-based metrics are significantly better than non-neural metrics in their levels of correlation with human judgments. Further, we investigate the impact of bad reference translations on the correlations of metrics with human judgment. We present a novel approach for generating synthetic reference translations based on the collection of MT system outputs and their corresponding MQM ratings, which has the potential to mitigate bad reference issues we observed this year for some language pairs. Finally, we also study the connections between the magnitude of metric differences and their expected significance in human evaluation, which should help the community to better understand and adopt new metrics.", }
This paper presents the results of the WMT23 Metrics Shared Task. Participants submitting automatic MT evaluation metrics were asked to score the outputs of the translation systems competing in the WMT23 News Translation Task. All metrics were evaluated on how well they correlate with human ratings at the system and segment level. Similar to last year, we acquired our own human ratings based on expert-based human evaluation via Multidimensional Quality Metrics (MQM). Following last year{'}s success, we also included a challenge set subtask, where participants had to create contrastive test suites for evaluating metrics{'} ability to capture and penalise specific types of translation errors. Furthermore, we improved our meta-evaluation procedure by considering fewer tasks and calculating a global score by weighted averaging across the various tasks. We present an extensive analysis on how well metrics perform on three language pairs: Chinese-English, Hebrew-English on the sentence-level and English-German on the paragraph-level. The results strongly confirm the results reported last year, that neural-based metrics are significantly better than non-neural metrics in their levels of correlation with human judgments. Further, we investigate the impact of bad reference translations on the correlations of metrics with human judgment. We present a novel approach for generating synthetic reference translations based on the collection of MT system outputs and their corresponding MQM ratings, which has the potential to mitigate bad reference issues we observed this year for some language pairs. Finally, we also study the connections between the magnitude of metric differences and their expected significance in human evaluation, which should help the community to better understand and adopt new metrics.
[ "Freitag, Markus", "Mathur, Nitika", "Lo, Chi-kiu", "Avramidis, Eleftherios", "Rei, Ricardo", "Thompson, Brian", "Kocmi, Tom", "Blain, Frederic", "Deutsch, Daniel", "Stewart, Craig", "Zerva, Chrysoula", "Castilho, Sheila", "Lavie, Alon", "Foster, George" ]
Results of WMT23 Metrics Shared Task: Metrics Might Be Guilty but References Are Not Innocent
wmt-1.51
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
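The core computation behind the meta-evaluation above is the correlation between metric scores and human MQM ratings, at the system and segment level. A sketch with SciPy on made-up numbers:

```python
from scipy.stats import pearsonr, kendalltau

# One score per MT system (system-level) ...
metric_sys = [0.82, 0.79, 0.74, 0.69]
human_sys = [-1.2, -1.9, -2.4, -3.1]        # MQM: less negative = better
print("system Pearson:", pearsonr(metric_sys, human_sys)[0])

# ... and one score per segment (segment-level).
metric_seg = [0.9, 0.4, 0.7, 0.2, 0.8]
human_seg = [-0.5, -4.0, -1.0, -6.0, -0.8]
print("segment Kendall:", kendalltau(metric_seg, human_seg)[0])
```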
https://aclanthology.org/2023.wmt-1.52.bib
https://aclanthology.org/2023.wmt-1.52/
@inproceedings{blain-etal-2023-findings, title = "Findings of the {WMT} 2023 Shared Task on Quality Estimation", author = "Blain, Frederic and Zerva, Chrysoula and Rei, Ricardo and Guerreiro, Nuno M. and Kanojia, Diptesh and C. de Souza, Jos{\'e} G. and Silva, Beatriz and Vaz, T{\^a}nia and Jingxuan, Yan and Azadi, Fatemeh and Orasan, Constantin and Martins, Andr{\'e}", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.52", doi = "10.18653/v1/2023.wmt-1.52", pages = "629--653", abstract = "We report the results of the WMT 2023 shared task on Quality Estimation, in which the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels, without access to reference translations. This edition introduces a few novel aspects and extensions that aim to enable more fine-grained, and explainable quality estimation approaches. We introduce an updated quality annotation scheme using Multidimensional Quality Metrics to obtain sentence- and word-level quality scores for three language pairs. We also extend the provided data to new language pairs: we specifically target low-resource languages and provide training, development and test data for English-Hindi, English-Tamil, English-Telegu and English-Gujarati as well as a zero-shot test-set for English-Farsi. Further, we introduce a novel fine-grained error prediction task aspiring to motivate research towards more detailed quality predictions.", }
We report the results of the WMT 2023 shared task on Quality Estimation, in which the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels, without access to reference translations. This edition introduces a few novel aspects and extensions that aim to enable more fine-grained and explainable quality estimation approaches. We introduce an updated quality annotation scheme using Multidimensional Quality Metrics to obtain sentence- and word-level quality scores for three language pairs. We also extend the provided data to new language pairs: we specifically target low-resource languages and provide training, development and test data for English-Hindi, English-Tamil, English-Telugu and English-Gujarati as well as a zero-shot test set for English-Farsi. Further, we introduce a novel fine-grained error prediction task aspiring to motivate research towards more detailed quality predictions.
[ "Blain, Frederic", "Zerva, Chrysoula", "Rei, Ricardo", "Guerreiro, Nuno M.", "Kanojia, Diptesh", "C. de Souza, Jos{\\'e} G.", "Silva, Beatriz", "Vaz, T{\\^a}nia", "Jingxuan, Yan", "Azadi, Fatemeh", "Orasan, Constantin", "Martins, Andr{\\'e}" ]
Findings of the WMT 2023 Shared Task on Quality Estimation
wmt-1.52
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
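The annotation scheme above produces span-level MQM errors that are folded into sentence- and word-level scores. A toy sketch of that folding, assuming one common MQM severity weighting (minor = 1, major = 5); the shared task's exact scheme may differ.

SEVERITY_WEIGHT = {"minor": 1.0, "major": 5.0}

def mqm_sentence_score(errors: list) -> float:
    # Sum severity penalties over annotated spans; 0.0 means error-free,
    # more negative means worse.
    return -sum(SEVERITY_WEIGHT[e["severity"]] for e in errors)

errors = [
    {"span": (4, 9), "severity": "major", "category": "accuracy/mistranslation"},
    {"span": (21, 27), "severity": "minor", "category": "fluency/grammar"},
]
print(mqm_sentence_score(errors))  # -> -6.0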
https://aclanthology.org/2023.wmt-1.53.bib
https://aclanthology.org/2023.wmt-1.53/
@inproceedings{liu-etal-2023-findings, title = "Findings of the Word-Level {A}uto{C}ompletion Shared Task in {WMT} 2023", author = "Liu, Lemao and Casacuberta, Francisco and Foster, George and Huang, Guoping and Koehn, Philipp and Kovacs, Geza and Shi, Shuming and Watanabe, Taro and Zong, Chengqing", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.53", doi = "10.18653/v1/2023.wmt-1.53", pages = "654--662", abstract = "This paper presents an overview of the second Word-Level autocompletion (WLAC) shared task for computer-aided translation, which aims to automatically complete a target word given a translation context including a human-typed character sequence. We largely adhere to the settings of the previous round of the shared task, but with two main differences: 1) The typed character sequence is obtained from the typing process of human translators to demonstrate system performance under real-world scenarios when preparing some type of testing examples; 2) We conduct a thorough analysis of the results of the submitted systems from three perspectives. From the experimental results, we observe that translation tasks are helpful to improve the performance of WLAC models. Additionally, our further analysis shows that the semantic error accounts for a significant portion of all errors, and thus it would be promising to take this type of error into account in the future.", }
This paper presents an overview of the second Word-Level autocompletion (WLAC) shared task for computer-aided translation, which aims to automatically complete a target word given a translation context including a human-typed character sequence. We largely adhere to the settings of the previous round of the shared task, but with two main differences: 1) The typed character sequence is obtained from the typing process of human translators to demonstrate system performance under real-world scenarios when preparing some type of testing examples; 2) We conduct a thorough analysis of the results of the submitted systems from three perspectives. From the experimental results, we observe that translation tasks are helpful to improve the performance of WLAC models. Additionally, our further analysis shows that the semantic error accounts for a significant portion of all errors, and thus it would be promising to take this type of error into account in the future.
[ "Liu, Lemao", "Casacuberta, Francisco", "Foster, George", "Huang, Guoping", "Koehn, Philipp", "Kovacs, Geza", "Shi, Shuming", "Watanabe, Taro", "Zong, Chengqing" ]
Findings of the Word-Level AutoCompletion Shared Task in WMT 2023
wmt-1.53
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
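A toy illustration of the WLAC setting in the record above: given the characters a translator has typed so far, propose likely word completions. Corpus frequency stands in for the neural scorers actually submitted, and conditioning on the translation context is omitted.

from collections import Counter

freq = Counter("the cat sat on the mat the cat ate the fish".split())

def autocomplete(typed_prefix: str, k: int = 3) -> list:
    # Rank vocabulary items that extend the typed prefix by corpus frequency.
    candidates = [w for w in freq if w.startswith(typed_prefix)]
    return sorted(candidates, key=lambda w: -freq[w])[:k]

print(autocomplete("ca"))  # -> ['cat']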
https://aclanthology.org/2023.wmt-1.54.bib
https://aclanthology.org/2023.wmt-1.54/
@inproceedings{semenov-etal-2023-findings, title = "Findings of the {WMT} 2023 Shared Task on Machine Translation with Terminologies", author = "Semenov, Kirill and Zouhar, Vil{\'e}m and Kocmi, Tom and Zhang, Dongdong and Zhou, Wangchunshu and Jiang, Yuchen Eleanor", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.54", doi = "10.18653/v1/2023.wmt-1.54", pages = "663--671", abstract = "The WMT 2023 Terminology Shared Task investigates progress in machine translation of texts with specialized vocabulary. The participants were given the source text and segment-level terminology dictionaries for three language pairs: Chinese→English, English→Czech, and German→English. We evaluate 21 submissions from 7 teams on two main criteria: general translation quality and the effectiveness of translating specialized terminology. Systems took varied approaches {---} incorporating terminology at inference time or using weakly supervised training with terminology access. While incorporating terminology dictionaries leads to improvements in translation quality, incorporating an equal amount of information from the reference leads to similar results. This challenges the position that terminologies are the crux of meaning in translation; it can also be explained by inadequate metrics that are not terminology-centric.", }
The WMT 2023 Terminology Shared Task investigates progress in machine translation of texts with specialized vocabulary. The participants were given the source text and segment-level terminology dictionaries for three language pairs: Chinese→English, English→Czech, and German→English. We evaluate 21 submissions from 7 teams on two main criteria: general translation quality and the effectiveness of translating specialized terminology. Systems took varied approaches {---} incorporating terminology at inference time or using weakly supervised training with terminology access. While incorporating terminology dictionaries leads to improvements in translation quality, incorporating an equal amount of information from the reference leads to similar results. This challenges the position that terminologies are the crux of meaning in translation; it can also be explained by inadequate metrics that are not terminology-centric.
[ "Semenov, Kirill", "Zouhar, Vil{\\'e}m", "Kocmi, Tom", "Zhang, Dongdong", "Zhou, Wangchunshu", "Jiang, Yuchen Eleanor" ]
Findings of the WMT 2023 Shared Task on Machine Translation with Terminologies
wmt-1.54
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
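A sketch of the task's second criterion above (effectiveness of translating specialized terminology) as a surface-level term-recall check: the share of dictionary target terms that actually appear in the output. Real terminology evaluation is more careful about morphology and span alignment.

def term_recall(hypothesis: str, term_dict: dict) -> float:
    # term_dict maps source terms to required target terms.
    hyp = hypothesis.lower()
    hits = sum(1 for tgt in term_dict.values() if tgt.lower() in hyp)
    return hits / len(term_dict) if term_dict else 1.0

terms = {"Schraubenschluessel": "wrench", "Dichtung": "gasket"}
print(term_recall("Tighten the gasket with a wrench.", terms))  # -> 1.0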
https://aclanthology.org/2023.wmt-1.55.bib
https://aclanthology.org/2023.wmt-1.55/
@inproceedings{bhattacharyya-etal-2023-findings, title = "Findings of the {WMT} 2023 Shared Task on Automatic Post-Editing", author = "Bhattacharyya, Pushpak and Chatterjee, Rajen and Freitag, Markus and Kanojia, Diptesh and Negri, Matteo and Turchi, Marco", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.55", doi = "10.18653/v1/2023.wmt-1.55", pages = "672--681", abstract = "We present the results from the 9th round of the WMT shared task on MT Automatic Post-Editing, which consists of automatically correcting the output of a {``}black-box{''} machine translation system by learning from human corrections. Like last year, the task focused on English→Marathi, with data coming from multiple domains (healthcare, tourism, and general/news). Despite the consistent task framework, this year{'}s data proved to be extremely challenging. As a matter of fact, none of the official submissions from the participating teams succeeded in improving the quality of the already high-quality initial translations (with baseline TER and BLEU scores of 26.6 and 70.66, respectively). Only one run, accepted as a {``}late{''} submission, achieved automatic evaluation scores that exceeded the baseline.", }
We present the results from the 9th round of the WMT shared task on MT Automatic Post-Editing, which consists of automatically correcting the output of a {``}black-box{''} machine translation system by learning from human corrections. Like last year, the task focused on English→Marathi, with data coming from multiple domains (healthcare, tourism, and general/news). Despite the consistent task framework, this year{'}s data proved to be extremely challenging. As a matter of fact, none of the official submissions from the participating teams succeeded in improving the quality of the already high-quality initial translations (with baseline TER and BLEU scores of 26.6 and 70.66, respectively). Only one run, accepted as a {``}late{''} submission, achieved automatic evaluation scores that exceeded the baseline.
[ "Bhattacharyya, Pushpak", "Chatterjee, Rajen", "Freitag, Markus", "Kanojia, Diptesh", "Negri, Matteo", "Turchi, Marco" ]
Findings of the WMT 2023 Shared Task on Automatic Post-Editing
wmt-1.55
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.56.bib
https://aclanthology.org/2023.wmt-1.56/
@inproceedings{pal-etal-2023-findings, title = "Findings of the {WMT} 2023 Shared Task on Low-Resource {I}ndic Language Translation", author = "Pal, Santanu and Pakray, Partha and Laskar, Sahinur Rahman and Laitonjam, Lenin and Khenglawt, Vanlalmuansangi and Warjri, Sunita and Dadure, Pankaj Kundan and Dash, Sandeep Kumar", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.56", doi = "10.18653/v1/2023.wmt-1.56", pages = "682--694", abstract = "This paper presents the results of the low-resource Indic language translation task organized alongside the Eighth Conference on Machine Translation (WMT) 2023. In this task, participants were asked to build machine translation systems for any of four language pairs, namely, English-Assamese, English-Mizo, English-Khasi, and English-Manipuri. For this task, the IndicNE-Corp1.0 dataset is released, which consists of parallel and monolingual corpora for northeastern Indic languages such as Assamese, Mizo, Khasi, and Manipuri. The evaluation will be carried out using automatic evaluation metrics (BLEU, TER, RIBES, COMET, ChrF) and human evaluation.", }
This paper presents the results of the low-resource Indic language translation task organized alongside the Eighth Conference on Machine Translation (WMT) 2023. In this task, participants were asked to build machine translation systems for any of four language pairs, namely, English-Assamese, English-Mizo, English-Khasi, and English-Manipuri. For this task, the IndicNE-Corp1.0 dataset is released, which consists of parallel and monolingual corpora for northeastern Indic languages such as Assamese, Mizo, Khasi, and Manipuri. The evaluation will be carried out using automatic evaluation metrics (BLEU, TER, RIBES, COMET, ChrF) and human evaluation.
[ "Pal, Santanu", "Pakray, Partha", "Laskar, Sahinur Rahman", "Laitonjam, Lenin", "Khenglawt, Vanlalmuansangi", "Warjri, Sunita", "Dadure, Pankaj Kundan", "Dash, S", "eep Kumar" ]
Findings of the WMT 2023 Shared Task on Low-Resource Indic Language Translation
wmt-1.56
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.57.bib
https://aclanthology.org/2023.wmt-1.57/
@inproceedings{amrhein-etal-2023-aces, title = "{ACES}: Translation Accuracy Challenge Sets at {WMT} 2023", author = "Amrhein, Chantal and Moghe, Nikita and Guillou, Liane", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.57", doi = "10.18653/v1/2023.wmt-1.57", pages = "695--712", abstract = "We benchmark the performance of segment-level metrics submitted to WMT 2023 using the ACES Challenge Set (Amrhein et al., 2022). The challenge set consists of 36K examples representing challenges from 68 phenomena and covering 146 language pairs. The phenomena range from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. For each metric, we provide a detailed profile of performance over a range of error categories as well as an overall ACES-Score for quick comparison. We also measure the incremental performance of the metrics submitted to both WMT 2023 and 2022. We find that 1) there is no clear winner among the metrics submitted to WMT 2023, and 2) performance change between the 2023 and 2022 versions of the metrics is highly variable. Our recommendations are similar to those from WMT 2022. Metric developers should focus on: building ensembles of metrics from different design families, developing metrics that pay more attention to the source and rely less on surface-level overlap, and carefully determining the influence of multilingual embeddings on MT evaluation.", }
We benchmark the performance of segment-level metrics submitted to WMT 2023 using the ACES Challenge Set (Amrhein et al., 2022). The challenge set consists of 36K examples representing challenges from 68 phenomena and covering 146 language pairs. The phenomena range from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. For each metric, we provide a detailed profile of performance over a range of error categories as well as an overall ACES-Score for quick comparison. We also measure the incremental performance of the metrics submitted to both WMT 2023 and 2022. We find that 1) there is no clear winner among the metrics submitted to WMT 2023, and 2) performance change between the 2023 and 2022 versions of the metrics is highly variable. Our recommendations are similar to those from WMT 2022. Metric developers should focus on: building ensembles of metrics from different design families, developing metrics that pay more attention to the source and rely less on surface-level overlap, and carefully determining the influence of multilingual embeddings on MT evaluation.
[ "Amrhein, Chantal", "Moghe, Nikita", "Guillou, Liane" ]
ACES: Translation Accuracy Challenge Sets at WMT 2023
wmt-1.57
2311.01153
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
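The ACES protocol above reduces to contrastive pairs: a metric passes an example when it scores the good translation strictly above the incorrect one. A self-contained sketch with a placeholder unigram-overlap metric, not any submitted system.

def overlap_metric(hyp: str, ref: str) -> float:
    # Placeholder metric: fraction of reference words present in the hypothesis.
    h, r = set(hyp.lower().split()), set(ref.lower().split())
    return len(h & r) / max(len(r), 1)

def challenge_accuracy(examples: list) -> float:
    wins = sum(overlap_metric(ex["good"], ex["ref"]) > overlap_metric(ex["bad"], ex["ref"])
               for ex in examples)
    return wins / len(examples)

examples = [{"ref": "He bought three apples.",
             "good": "He bought three apples.",
             "bad": "He bought thirty apples."}]
print(challenge_accuracy(examples))  # -> 1.0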
https://aclanthology.org/2023.wmt-1.58.bib
https://aclanthology.org/2023.wmt-1.58/
@inproceedings{avramidis-etal-2023-challenging, title = "Challenging the State-of-the-art Machine Translation Metrics from a Linguistic Perspective", author = {Avramidis, Eleftherios and Manakhimova, Shushen and Macketanz, Vivien and M{\"o}ller, Sebastian}, editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.58", doi = "10.18653/v1/2023.wmt-1.58", pages = "713--729", abstract = "We employ a linguistically motivated challenge set in order to evaluate the state-of-the-art machine translation metrics submitted to the Metrics Shared Task of the 8th Conference for Machine Translation. The challenge set includes about 21,000 items extracted from 155 machine translation systems for three language directions, covering more than 100 linguistically-motivated phenomena organized in 14 categories. The metrics that have the best performance with regard to our linguistically motivated analysis are the Cometoid22-wmt23 (a trained metric based on distillation) for German-English and MetricX-23-c (based on a fine-tuned mT5 encoder-decoder language model) for English-German and English-Russian. Some of the most difficult phenomena are passive voice for German-English, named entities, terminology and measurement units for English-German, and focus particles, adverbial clause and stripping for English-Russian.", }
We employ a linguistically motivated challenge set in order to evaluate the state-of-the-art machine translation metrics submitted to the Metrics Shared Task of the 8th Conference for Machine Translation. The challenge set includes about 21,000 items extracted from 155 machine translation systems for three language directions, covering more than 100 linguistically-motivated phenomena organized in 14 categories. The metrics that have the best performance with regard to our linguistically motivated analysis are the Cometoid22-wmt23 (a trained metric based on distillation) for German-English and MetricX-23-c (based on a fine-tuned mT5 encoder-decoder language model) for English-German and English-Russian. Some of the most difficult phenomena are passive voice for German-English, named entities, terminology and measurement units for English-German, and focus particles, adverbial clause and stripping for English-Russian.
[ "Avramidis, Eleftherios", "Manakhimova, Shushen", "Macketanz, Vivien", "M{\\\"o}ller, Sebastian" ]
Challenging the State-of-the-art Machine Translation Metrics from a Linguistic Perspective
wmt-1.58
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.59.bib
https://aclanthology.org/2023.wmt-1.59/
@inproceedings{dreano-etal-2023-tokengram, title = "{T}okengram{\_}{F}, a Fast and Accurate Token-based chr{F}++ Derivative", author = {Dreano, S{\"o}ren and Molloy, Derek and Murphy, Noel}, editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.59", doi = "10.18653/v1/2023.wmt-1.59", pages = "730--737", abstract = "Tokengram{\_}F is an F-score-based evaluation metric for Machine Translation that is heavily inspired by chrF++ and can act as a more accurate replacement. By replacing word n-grams with n-grams obtained from tokenization algorithms, tokengram{\_}F better captures similarities between words.", }
Tokengram{\_}F is an F-score-based evaluation metric for Machine Translation that is heavily inspired by chrF++ and can act as a more accurate replacement. By replacing word n-grams with n-grams obtained from tokenization algorithms, tokengram{\_}F better captures similarities between words.
[ "Dreano, S{\\\"o}ren", "Molloy, Derek", "Murphy, Noel" ]
Tokengram_F, a Fast and Accurate Token-based chrF++ Derivative
wmt-1.59
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
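The tokengram_F idea above, sketched: a chrF-style F-score computed over n-grams of tokenizer output rather than characters or words. A whitespace split stands in for a real subword tokenizer (e.g. BPE) so the example stays dependency-free.

from collections import Counter

def ngrams(tokens: list, n: int) -> Counter:
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def tokengram_f(hyp: str, ref: str, max_n: int = 2, beta: float = 2.0) -> float:
    tok = str.split  # assumption: stand-in for a subword tokenizer
    scores = []
    for n in range(1, max_n + 1):
        h, r = ngrams(tok(hyp), n), ngrams(tok(ref), n)
        match = sum((h & r).values())
        p = match / max(sum(h.values()), 1)    # n-gram precision
        rec = match / max(sum(r.values()), 1)  # n-gram recall
        scores.append(0.0 if p + rec == 0 else
                      (1 + beta ** 2) * p * rec / (beta ** 2 * p + rec))
    return sum(scores) / len(scores)

print(round(tokengram_f("the cat sat on the mat", "the cat is on the mat"), 3))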
https://aclanthology.org/2023.wmt-1.60.bib
https://aclanthology.org/2023.wmt-1.60/
@inproceedings{dreano-etal-2023-embed, title = "{E}mbed{\_}{L}lama: Using {LLM} Embeddings for the Metrics Shared Task", author = {Dreano, S{\"o}ren and Molloy, Derek and Murphy, Noel}, editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.60", doi = "10.18653/v1/2023.wmt-1.60", pages = "738--745", abstract = "Embed{\_}llama is an assessment metric for language translation that hinges on the recently introduced Llama 2 Large Language Model (LLM), specifically its embedding layer, with the aim of transforming sentences into a vector space in which geometric proximity reflects semantic proximity.", }
Embed{\_}llama is an assessment metric for language translation that hinges on the recently introduced Llama 2 Large Language Model (LLM), specifically its embedding layer, with the aim of transforming sentences into a vector space in which geometric proximity reflects semantic proximity.
[ "Dreano, S{\\\"o}ren", "Molloy, Derek", "Murphy, Noel" ]
Embed_Llama: Using LLM Embeddings for the Metrics Shared Task
wmt-1.60
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
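A toy version of the Embed_llama recipe above: mean-pool token embeddings into a sentence vector and score by cosine similarity. The tiny lookup table is a placeholder for Llama 2's embedding matrix.

import math

EMB = {  # placeholder 3-d "embedding layer"
    "the": [0.1, 0.3, 0.2], "cat": [0.9, 0.1, 0.4], "dog": [0.8, 0.2, 0.5],
    "sat": [0.2, 0.7, 0.1], "ran": [0.3, 0.6, 0.2],
}

def sent_vec(sentence: str) -> list:
    vecs = [EMB.get(t, [0.0, 0.0, 0.0]) for t in sentence.lower().split()]
    return [sum(dim) / len(vecs) for dim in zip(*vecs)]  # mean pooling

def cosine(u: list, v: list) -> float:
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

print(round(cosine(sent_vec("the cat sat"), sent_vec("the dog ran")), 3))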
https://aclanthology.org/2023.wmt-1.61.bib
https://aclanthology.org/2023.wmt-1.61/
@inproceedings{elnokrashy-kocmi-2023-ebleu, title = "e{BLEU}: Unexpectedly Good Machine Translation Evaluation Using Simple Word Embeddings", author = "ElNokrashy, Muhammad and Kocmi, Tom", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.61", doi = "10.18653/v1/2023.wmt-1.61", pages = "746--750", abstract = "We propose eBLEU, a metric inspired by the BLEU metric that uses embedding similarities instead of string matches. We introduce meaning diffusion vectors to enable matching n-grams of semantically similar words in a BLEU-like algorithm, using efficient, non-contextual word embeddings like fastText. On WMT23 data, eBLEU beats BLEU and ChrF by around 3.8{\%} system-level score, approaching BERTScore at −0.9{\%} absolute difference. In WMT22 scenarios, eBLEU outperforms f101spBLEU and ChrF in MQM by 2.2{\%}−3.6{\%}. Curiously, on MTurk evaluations, eBLEU surpasses past methods by 3.9{\%}−8.2{\%} (f200spBLEU, COMET-22). eBLEU presents an interesting middle ground between traditional metrics and pretrained metrics.", }
We propose eBLEU, a metric inspired by the BLEU metric that uses embedding similarities instead of string matches. We introduce meaning diffusion vectors to enable matching n-grams of semantically similar words in a BLEU-like algorithm, using efficient, non-contextual word embeddings like fastText. On WMT23 data, eBLEU beats BLEU and ChrF by around 3.8{\%} system-level score, approaching BERTScore at −0.9{\%} absolute difference. In WMT22 scenarios, eBLEU outperforms f101spBLEU and ChrF in MQM by 2.2{\%}−3.6{\%}. Curiously, on MTurk evaluations, eBLEU surpasses past methods by 3.9{\%}−8.2{\%} (f200spBLEU, COMET-22). eBLEU presents an interesting middle ground between traditional metrics and pretrained metrics.
[ "ElNokrashy, Muhammad", "Kocmi, Tom" ]
eBLEU: Unexpectedly Good Machine Translation Evaluation Using Simple Word Embeddings
wmt-1.61
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
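A much-simplified take on eBLEU's core move, shown for unigram precision only: a hypothesis word counts as matched when some reference word is within a cosine threshold. The toy 2-d vectors stand in for fastText embeddings, and the paper's meaning diffusion vectors are omitted.

import math

VEC = {"purchase": [0.80, 0.10], "buy": [0.75, 0.20],
       "three": [0.10, 0.90], "apples": [0.40, 0.60]}

def cos(u: list, v: list) -> float:
    return sum(a * b for a, b in zip(u, v)) / (math.hypot(*u) * math.hypot(*v))

def soft_precision(hyp: str, ref: str, tau: float = 0.95) -> float:
    # A word matches if it is close enough to any reference word in embedding space.
    refs = [VEC[w] for w in ref.split()]
    matched = sum(1 for w in hyp.split() if any(cos(VEC[w], r) >= tau for r in refs))
    return matched / len(hyp.split())

print(soft_precision("buy three apples", "purchase three apples"))  # -> 1.0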
https://aclanthology.org/2023.wmt-1.62.bib
https://aclanthology.org/2023.wmt-1.62/
@inproceedings{gowda-etal-2023-cometoid, title = "Cometoid: Distilling Strong Reference-based Machine Translation Metrics into {E}ven Stronger Quality Estimation Metrics", author = "Gowda, Thamme and Kocmi, Tom and Junczys-Dowmunt, Marcin", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.62", doi = "10.18653/v1/2023.wmt-1.62", pages = "751--755", abstract = "This paper describes our submissions to the 2023 Conference on Machine Translation (WMT-23) Metrics shared task. Knowledge distillation is commonly used to create smaller student models that mimic a larger teacher model while reducing the model size and hence inference cost in production. In this work, we apply knowledge distillation to machine translation evaluation metrics and distill existing reference-based teacher metrics into reference-free (quality estimation; QE) student metrics. We mainly focus on students of Unbabel{'}s COMET22 reference-based metric. When evaluating on the official WMT-22 Metrics evaluation task, our distilled Cometoid QE metrics outperform all other QE metrics on that set while matching or outperforming the reference-based teacher metric. Our metrics never see the human ground-truth scores directly {--} only the teacher metric was trained on human scores by its original creators. We also distill ChrF sentence-level scores into a neural QE metric and find that our reference-free (and fully human-score-free) student metric ChrFoid outperforms its teacher metric by over 7{\%} pairwise accuracy on the same WMT-22 task, rivaling other existing QE metrics.", }
This paper describes our submissions to the 2023 Conference on Machine Translation (WMT-23) Metrics shared task. Knowledge distillation is commonly used to create smaller student models that mimic a larger teacher model while reducing the model size and hence inference cost in production. In this work, we apply knowledge distillation to machine translation evaluation metrics and distill existing reference-based teacher metrics into reference-free (quality estimation; QE) student metrics. We mainly focus on students of Unbabel{'}s COMET22 reference-based metric. When evaluating on the official WMT-22 Metrics evaluation task, our distilled Cometoid QE metrics outperform all other QE metrics on that set while matching or outperforming the reference-based teacher metric. Our metrics never see the human ground-truth scores directly {--} only the teacher metric was trained on human scores by its original creators. We also distill ChrF sentence-level scores into a neural QE metric and find that our reference-free (and fully human-score-free) student metric ChrFoid outperforms its teacher metric by over 7{\%} pairwise accuracy on the same WMT-22 task, rivaling other existing QE metrics.
[ "Gowda, Thamme", "Kocmi, Tom", "Junczys-Dowmunt, Marcin" ]
Cometoid: Distilling Strong Reference-based Machine Translation Metrics into Even Stronger Quality Estimation Metrics
wmt-1.62
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
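Distillation as described above, in miniature: a reference-based "teacher" scores hypothesis-reference pairs, and a reference-free "student" (here a two-feature linear model trained with SGD) is regressed onto those scores without ever seeing human ratings. Everything below is a toy stand-in for COMET22 and Cometoid.

import random

def teacher(hyp: str, ref: str) -> float:
    # Toy reference-based metric: Jaccard overlap of word sets.
    h, r = set(hyp.split()), set(ref.split())
    return len(h & r) / max(len(h | r), 1)

def features(src: str, hyp: str) -> list:
    # Reference-free inputs only: a length ratio plus a bias term.
    return [len(hyp.split()) / max(len(src.split()), 1), 1.0]

data = [("ein kleiner test", "a small test", "a small test"),
        ("ein kleiner test", "a tiny exam cat", "a small test")]

random.seed(0)
w = [0.0, 0.0]
for _ in range(500):  # plain SGD on the squared error against teacher scores
    src, hyp, ref = random.choice(data)
    x, y = features(src, hyp), teacher(hyp, ref)
    err = sum(wi * xi for wi, xi in zip(w, x)) - y
    w = [wi - 0.05 * 2 * err * xi for wi, xi in zip(w, x)]

print("student weights:", [round(wi, 2) for wi in w])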
https://aclanthology.org/2023.wmt-1.63.bib
https://aclanthology.org/2023.wmt-1.63/
@inproceedings{juraska-etal-2023-metricx, title = "{M}etric{X}-23: The {G}oogle Submission to the {WMT} 2023 Metrics Shared Task", author = "Juraska, Juraj and Finkelstein, Mara and Deutsch, Daniel and Siddhant, Aditya and Mirzazadeh, Mehdi and Freitag, Markus", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.63", doi = "10.18653/v1/2023.wmt-1.63", pages = "756--767", abstract = "This report details the MetricX-23 submission to the WMT23 Metrics Shared Task and provides an overview of the experiments that informed which metrics were submitted. Our 3 submissions{---}each with a quality estimation (or reference-free) version{---}are all learned regression-based metrics that vary in the data used for training and which pretrained language model was used for initialization. We report results related to understanding (1) which supervised training data to use, (2) the impact of how the training labels are normalized, (3) the amount of synthetic training data to use, (4) how metric performance is related to model size, and (5) the effect of initializing the metrics with different pretrained language models. The most successful training recipe for MetricX employs two-stage fine-tuning on DA and MQM ratings, and includes synthetic training data. Finally, one important takeaway from our extensive experiments is that optimizing for both segment- and system-level performance at the same time is a challenging task.", }
This report details the MetricX-23 submission to the WMT23 Metrics Shared Task and provides an overview of the experiments that informed which metrics were submitted. Our 3 submissions{---}each with a quality estimation (or reference-free) version{---}are all learned regression-based metrics that vary in the data used for training and which pretrained language model was used for initialization. We report results related to understanding (1) which supervised training data to use, (2) the impact of how the training labels are normalized, (3) the amount of synthetic training data to use, (4) how metric performance is related to model size, and (5) the effect of initializing the metrics with different pretrained language models. The most successful training recipe for MetricX employs two-stage fine-tuning on DA and MQM ratings, and includes synthetic training data. Finally, one important takeaway from our extensive experiments is that optimizing for both segment- and system-level performance at the same time is a challenging task.
[ "Juraska, Juraj", "Finkelstein, Mara", "Deutsch, Daniel", "Siddhant, Aditya", "Mirzazadeh, Mehdi", "Freitag, Markus" ]
MetricX-23: The Google Submission to the WMT 2023 Metrics Shared Task
wmt-1.63
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.64.bib
https://aclanthology.org/2023.wmt-1.64/
@inproceedings{kocmi-federmann-2023-gemba, title = "{GEMBA}-{MQM}: Detecting Translation Quality Error Spans with {GPT}-4", author = "Kocmi, Tom and Federmann, Christian", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.64", doi = "10.18653/v1/2023.wmt-1.64", pages = "768--775", abstract = "This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to detect translation quality errors, specifically for the quality estimation setting without the need for human reference translations. Based on the power of large language models (LLMs), GEMBA-MQM employs a fixed three-shot prompting technique, querying the GPT-4 model to mark error quality spans. Compared to previous works, our method has language-agnostic prompts, thus avoiding the need for manual prompt preparation for new languages. While preliminary results indicate that GEMBA-MQM achieves state-of-the-art accuracy for system ranking, we advise caution when using it in academic works to demonstrate improvements over other methods due to its dependence on the proprietary, black-box GPT model.", }
This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to detect translation quality errors, specifically for the quality estimation setting without the need for human reference translations. Based on the power of large language models (LLMs), GEMBA-MQM employs a fixed three-shot prompting technique, querying the GPT-4 model to mark error quality spans. Compared to previous works, our method has language-agnostic prompts, thus avoiding the need for manual prompt preparation for new languages. While preliminary results indicate that GEMBA-MQM achieves state-of-the-art accuracy for system ranking, we advise caution when using it in academic works to demonstrate improvements over other methods due to its dependence on the proprietary, black-box GPT model.
[ "Kocmi, Tom", "Federmann, Christian" ]
GEMBA-MQM: Detecting Translation Quality Error Spans with GPT-4
wmt-1.64
2310.13988
[ "" ]
https://huggingface.co/papers/2310.13988
0
1
0
2
[]
[]
[]
1
Poster
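The shape of a GEMBA-MQM-style query, sketched: a prompt asks the LLM to list error spans with MQM severities, and the reply is folded into a penalty score. The prompt wording, severity weights and reply format below are illustrative placeholders, not the paper's actual few-shot prompt, and no real LLM API is called.

SEVERITY_WEIGHT = {"minor": 1, "major": 5, "critical": 10}

def build_prompt(src: str, hyp: str, src_lang: str, tgt_lang: str) -> str:
    # Language-agnostic template: only the language names change per pair.
    return (f"Identify translation errors in the {tgt_lang} translation of the "
            f"{src_lang} source. List each error as '<span> - <severity>'.\n"
            f"Source: {src}\nTranslation: {hyp}\nErrors:")

def score_from_reply(reply: str) -> int:
    # Parse one "<span> - <severity>" line per error and sum the penalties.
    penalty = 0
    for line in reply.strip().splitlines():
        _, severity = line.rsplit(" - ", 1)
        penalty += SEVERITY_WEIGHT[severity.strip()]
    return -penalty

print(build_prompt("Der Hund bellt.", "The dog meows.", "German", "English"))
print(score_from_reply("meows - major"))  # -> -5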
https://aclanthology.org/2023.wmt-1.65.bib
https://aclanthology.org/2023.wmt-1.65/
@inproceedings{lo-etal-2023-metric, title = "Metric Score Landscape Challenge ({MSLC}23): Understanding Metrics{'} Performance on a Wider Landscape of Translation Quality", author = "Lo, Chi-kiu and Larkin, Samuel and Knowles, Rebecca", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.65", doi = "10.18653/v1/2023.wmt-1.65", pages = "776--799", abstract = "The Metric Score Landscape Challenge (MSLC23) dataset aims to gain insight into metric scores on a broader/wider landscape of machine translation (MT) quality. It provides a collection of low- to medium-quality MT output on the WMT23 general task test set. Together with the high quality systems submitted to the general task, this will enable better interpretation of metric scores across a range of different levels of translation quality. With this wider range of MT quality, we also visualize and analyze metric characteristics beyond just correlation.", }
The Metric Score Landscape Challenge (MSLC23) dataset aims to gain insight into metric scores on a broader/wider landscape of machine translation (MT) quality. It provides a collection of low- to medium-quality MT output on the WMT23 general task test set. Together with the high quality systems submitted to the general task, this will enable better interpretation of metric scores across a range of different levels of translation quality. With this wider range of MT quality, we also visualize and analyze metric characteristics beyond just correlation.
[ "Lo, Chi-kiu", "Larkin, Samuel", "Knowles, Rebecca" ]
Metric Score Landscape Challenge (MSLC23): Understanding Metrics' Performance on a Wider Landscape of Translation Quality
wmt-1.65
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.66.bib
https://aclanthology.org/2023.wmt-1.66/
@inproceedings{mukherjee-shrivastava-2023-mee4, title = "{MEE}4 and {XL}sim : {IIIT} {HYD}{'}s Submissions{'} for {WMT}23 Metrics Shared Task", author = "Mukherjee, Ananya and Shrivastava, Manish", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.66", doi = "10.18653/v1/2023.wmt-1.66", pages = "800--805", abstract = "This paper presents our contributions to the WMT2023 shared metrics task, consisting of two distinct evaluation approaches: a) Unsupervised Metric (MEE4) and b) Supervised Metric (XLsim). MEE4 represents an unsupervised, reference-based assessment metric that quantifies linguistic features, encompassing lexical, syntactic, semantic, morphological, and contextual similarities, leveraging embeddings. In contrast, XLsim is a supervised reference-based evaluation metric, employing a Siamese architecture, which regresses on Direct Assessments (DA) from previous WMT News Translation shared tasks (2017-2022). XLsim is trained using XLM-RoBERTa (base) on English-German reference and MT pairs with human scores.", }
This paper presents our contributions to the WMT2023 shared metrics task, consisting of two distinct evaluation approaches: a) Unsupervised Metric (MEE4) and b) Supervised Metric (XLsim). MEE4 represents an unsupervised, reference-based assessment metric that quantifies linguistic features, encompassing lexical, syntactic, semantic, morphological, and contextual similarities, leveraging embeddings. In contrast, XLsim is a supervised reference-based evaluation metric, employing a Siamese architecture, which regresses on Direct Assessments (DA) from previous WMT News Translation shared tasks (2017-2022). XLsim is trained using XLM-RoBERTa (base) on English-German reference and MT pairs with human scores.
[ "Mukherjee, Ananya", "Shrivastava, Manish" ]
MEE4 and XLsim : IIIT HYD's Submissions' for WMT23 Metrics Shared Task
wmt-1.66
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.67.bib
https://aclanthology.org/2023.wmt-1.67/
@inproceedings{naskar-etal-2023-quality, title = "Quality Estimation Using Minimum {B}ayes Risk", author = "Naskar, Subhajit and Deutsch, Daniel and Freitag, Markus", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.67", doi = "10.18653/v1/2023.wmt-1.67", pages = "806--811", abstract = "This report describes the Minimum Bayes Risk Quality Estimation (MBR-QE) submission to the Workshop on Machine Translation{'}s 2023 Metrics Shared Task. MBR decoding with neural utility metrics like BLEURT is known to be effective in generating high quality machine translations. We use the underlying technique of MBR decoding and develop an MBR-based reference-free quality estimation metric. Our method uses an evaluator machine translation system and a reference-based utility metric (specifically BLEURT and MetricX) to calculate a quality estimation score of a model. We report results related to comparing different MBR configurations and utility metrics.", }
This report describes the Minimum Bayes Risk Quality Estimation (MBR-QE) submission to the Workshop on Machine Translation{'}s 2023 Metrics Shared Task. MBR decoding with neural utility metrics like BLEURT is known to be effective in generating high quality machine translations. We use the underlying technique of MBR decoding and develop an MBR-based reference-free quality estimation metric. Our method uses an evaluator machine translation system and a reference-based utility metric (specifically BLEURT and MetricX) to calculate a quality estimation score of a model. We report results related to comparing different MBR configurations and utility metrics.
[ "Naskar, Subhajit", "Deutsch, Daniel", "Freitag, Markus" ]
Quality Estimation Using Minimum Bayes Risk
wmt-1.67
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
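The MBR-QE recipe above in sketch form: score a hypothesis by its average utility against pseudo-references sampled from an evaluator MT system. The unigram-F1 utility and the hard-coded sample list stand in for BLEURT/MetricX and real system samples.

def utility(hyp: str, pseudo_ref: str) -> float:
    # Toy utility: unigram F1 between hypothesis and pseudo-reference.
    h, r = set(hyp.split()), set(pseudo_ref.split())
    if not h or not r:
        return 0.0
    p, rec = len(h & r) / len(h), len(h & r) / len(r)
    return 0.0 if p + rec == 0 else 2 * p * rec / (p + rec)

def mbr_qe(hyp: str, sampled_refs: list) -> float:
    # Expected utility over samples approximates reference-free quality.
    return sum(utility(hyp, s) for s in sampled_refs) / len(sampled_refs)

samples = ["the meeting starts at noon", "the meeting begins at noon"]
print(round(mbr_qe("the meeting starts at twelve", samples), 2))  # -> 0.7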
https://aclanthology.org/2023.wmt-1.68.bib
https://aclanthology.org/2023.wmt-1.68/
@inproceedings{raunak-etal-2023-evaluating, title = "Evaluating Metrics for Document-context Evaluation in Machine Translation", author = "Raunak, Vikas and Kocmi, Tom and Post, Matt", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.68", doi = "10.18653/v1/2023.wmt-1.68", pages = "812--814", abstract = "We describe our submission of a new metric, SLIDE (Raunak et al., 2023), to the WMT 2023 metrics task. SLIDE is a reference-free quality-estimation metric that works by constructing a fixed sentence-length window over the documents in a test set, concatenating chunks and then sending them for scoring as a single unit by COMET (Rei et al., 2022). We find that SLIDE improves dramatically over its context-less counterpart on the two WMT22 evaluation campaigns (MQM and DA+SQM).", }
We describe our submission of a new metric, SLIDE (Raunak et al., 2023), to the WMT 2023 metrics task. SLIDE is a reference-free quality-estimation metric that works by constructing a fixed sentence-length window over the documents in a test set, concatenating chunks and then sending them for scoring as a single unit by COMET (Rei et al., 2022). We find that SLIDE improves dramatically over its context-less counterpart on the two WMT22 evaluation campaigns (MQM and DA+SQM).
[ "Raunak, Vikas", "Kocmi, Tom", "Post, Matt" ]
Evaluating Metrics for Document-context Evaluation in Machine Translation
wmt-1.68
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
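The SLIDE construction above, sketched: slide a fixed-size window over each document's sentences, concatenate each window into one chunk, and score chunks as single units. qe_score is a placeholder for the COMET-based QE scorer.

def make_chunks(sentences: list, window: int, stride: int) -> list:
    # One chunk per window position; short documents yield a single chunk.
    return [" ".join(sentences[i:i + window])
            for i in range(0, max(len(sentences) - window + 1, 1), stride)]

def qe_score(chunk: str) -> float:
    # Placeholder scorer; a real system would call a COMET QE model here.
    return min(len(chunk.split()) / 50.0, 1.0)

doc = ["Sentence one.", "Sentence two.", "Sentence three.", "Sentence four."]
chunks = make_chunks(doc, window=3, stride=1)
print(round(sum(qe_score(c) for c in chunks) / len(chunks), 3))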
https://aclanthology.org/2023.wmt-1.69.bib
https://aclanthology.org/2023.wmt-1.69/
@inproceedings{viskov-etal-2023-semantically, title = "Semantically-Informed Regressive Encoder Score", author = "Viskov, Vasiliy and Kokush, George and Larionov, Daniil and Eger, Steffen and Panchenko, Alexander", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.69", doi = "10.18653/v1/2023.wmt-1.69", pages = "815--821", abstract = "Machine translation is the natural language generation (NLG) problem of translating source text from one language to another. Like every task in the machine learning domain, it requires an evaluation metric. The most obvious one is human evaluation, but it is expensive in terms of both money and time. In recent years, with the advent of pretrained transformer architectures and large language models (LLMs), state-of-the-art results in automatic machine translation evaluation have taken a huge step forward in terms of correlation with expert assessment. We introduce MRE-Score, the seMantically-informed Regression Encoder Score, an approach to constructing an automatic machine translation evaluation system based on a regression encoder and contrastive pretraining for the downstream problem.", }
Machine translation is the natural language generation (NLG) problem of translating source text from one language to another. Like every task in the machine learning domain, it requires an evaluation metric. The most obvious one is human evaluation, but it is expensive in terms of both money and time. In recent years, with the advent of pretrained transformer architectures and large language models (LLMs), state-of-the-art results in automatic machine translation evaluation have taken a huge step forward in terms of correlation with expert assessment. We introduce MRE-Score, the seMantically-informed Regression Encoder Score, an approach to constructing an automatic machine translation evaluation system based on a regression encoder and contrastive pretraining for the downstream problem.
[ "Viskov, Vasiliy", "Kokush, George", "Larionov, Daniil", "Eger, Steffen", "Panchenko, Alex", "er" ]
Semantically-Informed Regressive Encoder Score
wmt-1.69
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.70.bib
https://aclanthology.org/2023.wmt-1.70/
@inproceedings{wu-etal-2023-empowering, title = "Empowering a Metric with {LLM}-assisted Named Entity Annotation: {HW}-{TSC}{'}s Submission to the {WMT}23 Metrics Shared Task", author = "Wu, Zhanglin and Liu, Yilun and Zhang, Min and Zhao, Xiaofeng and Zhu, Junhao and Zhu, Ming and Qiao, Xiaosong and Zhang, Jingfei and Miaomiao, Ma and Yanqing, Zhao and Peng, Song and Tao, Shimin and Yang, Hao and Jiang, Yanfei", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.70", doi = "10.18653/v1/2023.wmt-1.70", pages = "822--828", abstract = "This paper presents the submission of Huawei Translation Service Center (HW-TSC) to the WMT23 metrics shared task, in which we submit two metrics: KG-BERTScore and HWTSC-EE-Metric. Among them, KG-BERTScore is our primary submission for the reference-free metric, which can provide both segment-level and system-level scoring, while HWTSC-EE-Metric is our primary submission for the reference-based metric, which can only provide system-level scoring. Overall, our metrics show relatively high correlations with MQM scores on the metrics tasks of previous years. Especially on system-level scoring tasks, our metrics achieve new state-of-the-art results in many language pairs.", }
This paper presents the submission of Huawei Translation Service Center (HW-TSC) to the WMT23 metrics shared task, in which we submit two metrics: KG-BERTScore and HWTSC-EE-Metric. Among them, KG-BERTScore is our primary submission for the reference-free metric, which can provide both segment-level and system-level scoring, while HWTSC-EE-Metric is our primary submission for the reference-based metric, which can only provide system-level scoring. Overall, our metrics show relatively high correlations with MQM scores on the metrics tasks of previous years. Especially on system-level scoring tasks, our metrics achieve new state-of-the-art results in many language pairs.
[ "Wu, Zhanglin", "Liu, Yilun", "Zhang, Min", "Zhao, Xiaofeng", "Zhu, Junhao", "Zhu, Ming", "Qiao, Xiaosong", "Zhang, Jingfei", "Miaomiao, Ma", "Yanqing, Zhao", "Peng, Song", "Tao, Shimin", "Yang, Hao", "Jiang, Yanfei" ]
Empowering a Metric with LLM-assisted Named Entity Annotation: HW-TSC's Submission to the WMT23 Metrics Shared Task
wmt-1.70
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.71.bib
https://aclanthology.org/2023.wmt-1.71/
@inproceedings{geng-etal-2023-unify, title = "Unify Word-level and Span-level Tasks: {NJUNLP}{'}s Participation for the {WMT}2023 Quality Estimation Shared Task", author = "Geng, Xiang and Lai, Zhejian and Zhang, Yu and Tao, Shimin and Yang, Hao and Chen, Jiajun and Huang, Shujian", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.71", doi = "10.18653/v1/2023.wmt-1.71", pages = "829--834", abstract = "We introduce the submissions of the NJUNLP team to the WMT 2023 Quality Estimation (QE) shared task. Our team submitted predictions for the English-German language pair on both sub-tasks: (i) sentence- and word-level quality prediction; and (ii) fine-grained error span detection. This year, we further explore pseudo data methods for QE based on the NJUQE framework (https://github.com/NJUNLP/njuqe). We generate pseudo MQM data using parallel data from the WMT translation task. We pre-train the XLMR large model on pseudo QE data, then fine-tune it on real QE data. At both stages, we jointly learn sentence-level scores and word-level tags. Empirically, we conduct experiments to find the key hyper-parameters that improve the performance. Technically, we propose a simple method that converts the word-level outputs into fine-grained error span results. Overall, our models achieved the best results in English-German for both word-level and fine-grained error span detection sub-tasks by a considerable margin.", }
We introduce the submissions of the NJUNLP team to the WMT 2023 Quality Estimation (QE) shared task. Our team submitted predictions for the English-German language pair on both sub-tasks: (i) sentence- and word-level quality prediction; and (ii) fine-grained error span detection. This year, we further explore pseudo data methods for QE based on the NJUQE framework (https://github.com/NJUNLP/njuqe). We generate pseudo MQM data using parallel data from the WMT translation task. We pre-train the XLMR large model on pseudo QE data, then fine-tune it on real QE data. At both stages, we jointly learn sentence-level scores and word-level tags. Empirically, we conduct experiments to find the key hyper-parameters that improve the performance. Technically, we propose a simple method that converts the word-level outputs into fine-grained error span results. Overall, our models achieved the best results in English-German for both word-level and fine-grained error span detection sub-tasks by a considerable margin.
[ "Geng, Xiang", "Lai, Zhejian", "Zhang, Yu", "Tao, Shimin", "Yang, Hao", "Chen, Jiajun", "Huang, Shujian" ]
Unify Word-level and Span-level Tasks: NJUNLP's Participation for the WMT2023 Quality Estimation Shared Task
wmt-1.71
2309.13230
[ "https://github.com/njunlp/njuqe" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
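The "simple method" mentioned above for converting word-level outputs into error spans, sketched: contiguous runs of BAD tags become one fine-grained span each (severity assignment is omitted).

def tags_to_spans(tokens: list, tags: list) -> list:
    spans, start = [], None
    for i, tag in enumerate(tags + ["OK"]):  # sentinel closes a trailing run
        if tag == "BAD" and start is None:
            start = i
        elif tag != "BAD" and start is not None:
            spans.append((start, i - 1, " ".join(tokens[start:i])))
            start = None
    return spans

toks = ["the", "cat", "meows", "loud", "today"]
print(tags_to_spans(toks, ["OK", "OK", "BAD", "BAD", "OK"]))
# -> [(2, 3, 'meows loud')]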
https://aclanthology.org/2023.wmt-1.72.bib
https://aclanthology.org/2023.wmt-1.72/
@inproceedings{li-etal-2023-hw-tsc, title = "{HW}-{TSC} 2023 Submission for the Quality Estimation Shared Task", author = "Li, Yuang and Su, Chang and Zhu, Ming and Piao, Mengyao and Lyu, Xinglin and Zhang, Min and Yang, Hao", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.72", doi = "10.18653/v1/2023.wmt-1.72", pages = "835--840", abstract = "Quality estimation (QE) is an essential technique to assess machine translation quality without reference translations. In this paper, we focus on Huawei Translation Services Center{'}s (HW-TSC{'}s) submission to the sentence-level QE shared task, named Ensemble-CrossQE. Our system uses CrossQE, the same model architecture as our last year{'}s submission, which consists of a multilingual base model and a task-specific downstream layer. The input is the concatenation of the source and the translated sentences. To enhance the performance, we finetuned and ensembled multiple base models such as XLM-R, InfoXLM, RemBERT and CometKiwi. Moreover, we introduce a new corruption-based data augmentation method, which generates deletion, substitution and insertion errors in the original translation and uses a reference-based QE model to obtain pseudo scores. Results show that our system achieves impressive performance on sentence-level QE test sets and ranked first for three language pairs: English-Hindi, English-Tamil and English-Telugu. In addition, we participated in the error span detection task. The submitted model outperforms the baseline on Chinese-English and Hebrew-English language pairs.", }
Quality estimation (QE) is an essential technique to assess machine translation quality without reference translations. In this paper, we focus on Huawei Translation Services Center{'}s (HW-TSC{'}s) submission to the sentence-level QE shared task, named Ensemble-CrossQE. Our system uses CrossQE, the same model architecture as our last year{'}s submission, which consists of a multilingual base model and a task-specific downstream layer. The input is the concatenation of the source and the translated sentences. To enhance the performance, we finetuned and ensembled multiple base models such as XLM-R, InfoXLM, RemBERT and CometKiwi. Moreover, we introduce a new corruption-based data augmentation method, which generates deletion, substitution and insertion errors in the original translation and uses a reference-based QE model to obtain pseudo scores. Results show that our system achieves impressive performance on sentence-level QE test sets and ranked first for three language pairs: English-Hindi, English-Tamil and English-Telugu. In addition, we participated in the error span detection task. The submitted model outperforms the baseline on Chinese-English and Hebrew-English language pairs.
[ "Li, Yuang", "Su, Chang", "Zhu, Ming", "Piao, Mengyao", "Lyu, Xinglin", "Zhang, Min", "Yang, Hao" ]
HW-TSC 2023 Submission for the Quality Estimation Shared Task
wmt-1.72
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
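The corruption-based augmentation above, sketched: perturb a clean translation with random deletions, substitutions and insertions; a reference-based QE model (not shown) would then assign each corrupted version a pseudo score. The substitution vocabulary is a toy.

import random

VOCAB = ["house", "river", "quickly", "green"]

def corrupt(tokens: list, n_ops: int = 2, seed: int = 0) -> list:
    rng = random.Random(seed)
    out = tokens[:]
    for _ in range(n_ops):
        op = rng.choice(["delete", "substitute", "insert"])
        i = rng.randrange(len(out))
        if op == "delete" and len(out) > 1:
            del out[i]
        elif op == "substitute":
            out[i] = rng.choice(VOCAB)
        else:
            out.insert(i, rng.choice(VOCAB))
    return out

clean = "the cat sat on the mat".split()
print(" ".join(corrupt(clean)))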
https://aclanthology.org/2023.wmt-1.73.bib
https://aclanthology.org/2023.wmt-1.73/
@inproceedings{rei-etal-2023-scaling, title = "Scaling up {C}omet{K}iwi: Unbabel-{IST} 2023 Submission for the Quality Estimation Shared Task", author = "Rei, Ricardo and Guerreiro, Nuno M. and Pombal, Jos{\'e} and van Stigt, Daan and Treviso, Marcos and Coheur, Luisa and C. de Souza, Jos{\'e} G. and Martins, Andr{\'e}", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.73", doi = "10.18653/v1/2023.wmt-1.73", pages = "841--848", abstract = "We present the joint contribution of Unbabel and Instituto Superior T{\'e}cnico to the WMT 2023 Shared Task on Quality Estimation (QE). Our team participated in all tasks: Sentence- and Word-level Quality Prediction and Fine-grained error span detection. For all tasks we build on the CometKiwi model (Rei et al., 2022). Our multilingual approaches are ranked first for all tasks, reaching state-of-the-art performance for quality estimation at word-, span- and sentence-level granularity. Compared to the previous state-of-the-art, CometKiwi, we show large improvements in correlation with human judgements (up to 10 Spearman points) and surpass the second-best multilingual submission by up to 3.8 absolute points.", }
We present the joint contribution of Unbabel and Instituto Superior T{\'e}cnico to the WMT 2023 Shared Task on Quality Estimation (QE). Our team participated in all tasks: Sentence- and Word-level Quality Prediction and Fine-grained error span detection. For all tasks we build on the CometKiwi model (Rei et al., 2022). Our multilingual approaches are ranked first for all tasks, reaching state-of-the-art performance for quality estimation at word-, span- and sentence-level granularity. Compared to the previous state-of-the-art, CometKiwi, we show large improvements in correlation with human judgements (up to 10 Spearman points) and surpass the second-best multilingual submission by up to 3.8 absolute points.
[ "Rei, Ricardo", "Guerreiro, Nuno M.", "Pombal, Jos{\\~A}{\\copyright}", "van Stigt, Daan", "Treviso, Marcos", "Coheur, Luisa", "C. de Souza, Jos{\\'e} G.", "Martins, Andr{\\'e}" ]
Scaling up CometKiwi: Unbabel-IST 2023 Submission for the Quality Estimation Shared Task
wmt-1.73
2309.11925
[ "" ]
https://huggingface.co/papers/2309.11925
1
0
0
8
[ "Unbabel/wmt23-cometkiwi-da-xl", "Unbabel/wmt23-cometkiwi-da-xxl" ]
[]
[]
1
Poster
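Since this record links the released checkpoints, a short usage sketch: loading one of them with the unbabel-comet package, assuming its documented download_model/load_from_checkpoint/predict interface. The XL and XXL checkpoints are gated on Hugging Face, so accepting the license and logging in may be required first.

from comet import download_model, load_from_checkpoint

model_path = download_model("Unbabel/wmt23-cometkiwi-da-xl")
model = load_from_checkpoint(model_path)

# CometKiwi is reference-free: each sample needs only source and MT output.
data = [{"src": "Der Hund bellt.", "mt": "The dog barks."}]
print(model.predict(data, batch_size=8, gpus=0))  # gpus=0: CPU inference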
https://aclanthology.org/2023.wmt-1.74.bib
https://aclanthology.org/2023.wmt-1.74/
@inproceedings{sindhujan-etal-2023-surreyai, title = "{S}urrey{AI} 2023 Submission for the Quality Estimation Shared Task", author = "Sindhujan, Archchana and Kanojia, Diptesh and Orasan, Constantin and Ranasinghe, Tharindu", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.74", doi = "10.18653/v1/2023.wmt-1.74", pages = "849--855", abstract = "Quality Estimation (QE) systems are important in situations where it is necessary to assess the quality of translations, but there is no reference available. This paper describes the approach adopted by the SurreyAI team for addressing the Sentence-Level Direct Assessment shared task in WMT23. The proposed approach builds upon the TransQuest framework, exploring various autoencoder pre-trained language models within the MonoTransQuest architecture using single and ensemble settings. The autoencoder pre-trained language models employed in the proposed systems are XLMV, InfoXLM-large, and XLMR-large. The evaluation utilizes Spearman and Pearson correlation coefficients, assessing the relationship between machine-predicted quality scores and human judgments for 5 language pairs (English-Gujarati, English-Hindi, English-Marathi, English-Tamil and English-Telugu). The MonoTQ-InfoXLM-large approach emerges as a robust strategy, surpassing all other individual models proposed in this study by significantly improving over the baseline for the majority of the language pairs.", }
Quality Estimation (QE) systems are important in situations where it is necessary to assess the quality of translations, but there is no reference available. This paper describes the approach adopted by the SurreyAI team for addressing the Sentence-Level Direct Assessment shared task in WMT23. The proposed approach builds upon the TransQuest framework, exploring various autoencoder pre-trained language models within the MonoTransQuest architecture using single and ensemble settings. The autoencoder pre-trained language models employed in the proposed systems are XLMV, InfoXLM-large, and XLMR-large. The evaluation utilizes Spearman and Pearson correlation coefficients, assessing the relationship between machine-predicted quality scores and human judgments for 5 language pairs (English-Gujarati, English-Hindi, English-Marathi, English-Tamil and English-Telugu). The MonoTQ-InfoXLM-large approach emerges as a robust strategy, surpassing all other individual models proposed in this study by significantly improving over the baseline for the majority of the language pairs.
[ "Sindhujan, Archchana", "Kanojia, Diptesh", "Orasan, Constantin", "Ranasinghe, Tharindu" ]
SurreyAI 2023 Submission for the Quality Estimation Shared Task
wmt-1.74
2312.00525
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
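The wmt-1.74 record above builds on MonoTransQuest with several backbones. A sketch of scoring a source-translation pair, assuming the TransQuest 1.x API as documented; the checkpoint name here is a publicly released generic model, not one of the submission's own systems:

```python
import torch
from transquest.algo.sentence_level.monotransquest.run_model import MonoTransQuestModel

# load a released sentence-level DA model (illustrative checkpoint choice)
model = MonoTransQuestModel(
    "xlmroberta", "TransQuest/monotransquest-da-multilingual",
    num_labels=1, use_cuda=torch.cuda.is_available(),
)

# each input is a [source, translation] pair; the output is a DA-style score
predictions, raw_outputs = model.predict(
    [["Der Bericht wurde gestern veröffentlicht.",
      "The report was published yesterday."]]
)
print(predictions)
```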
https://aclanthology.org/2023.wmt-1.75.bib
https://aclanthology.org/2023.wmt-1.75/
@inproceedings{wu-etal-2023-mmts, title = "{MMT}{'}s Submission for the {WMT} 2023 Quality Estimation Shared Task", author = "Wu, Yulong and Schlegel, Viktor and Beck, Daniel and Batista-Navarro, Riza", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.75", doi = "10.18653/v1/2023.wmt-1.75", pages = "856--862", abstract = "This paper presents our submission to the WMT 2023 Quality Estimation (QE) shared task 1 (sentence-level subtask). We propose a straightforward training data augmentation approach aimed at improving the correlation between QE model predictions and human quality assessments. Utilising eleven data augmentation approaches and six distinct language pairs, we systematically create augmented training sets by individually applying each method to the original training set of each respective language pair. By evaluating the performance gap between the model before and after training on the augmented dataset, as measured on the development set, we assess the effectiveness of each augmentation method. Experimental results reveal that synonym replacement via the Paraphrase Database (PPDB) yields the most substantial performance boost for language pairs English-German, English-Marathi and English-Gujarati, while for the remaining language pairs, methods such as contextual word embeddings-based words insertion, back translation, and direct paraphrasing prove to be more effective. Training the model on a more diverse and larger set of samples does confer further performance improvements for certain language pairs, albeit to a marginal extent, and this phenomenon is not universally applicable. At the time of submission, we select the model trained on the augmented dataset constructed using the respective most effective method to generate predictions for the test set in each language pair, except for the English-German. Despite not being highly competitive, our system consistently surpasses the baseline performance on most language pairs and secures a third-place ranking in the English-Marathi.", }
This paper presents our submission to the WMT 2023 Quality Estimation (QE) shared task 1 (sentence-level subtask). We propose a straightforward training data augmentation approach aimed at improving the correlation between QE model predictions and human quality assessments. Utilising eleven data augmentation approaches and six distinct language pairs, we systematically create augmented training sets by individually applying each method to the original training set of each respective language pair. By evaluating the performance gap between the model before and after training on the augmented dataset, as measured on the development set, we assess the effectiveness of each augmentation method. Experimental results reveal that synonym replacement via the Paraphrase Database (PPDB) yields the most substantial performance boost for the language pairs English-German, English-Marathi and English-Gujarati, while for the remaining language pairs, methods such as contextual word embedding-based word insertion, back translation, and direct paraphrasing prove to be more effective. Training the model on a more diverse and larger set of samples does confer further performance improvements for certain language pairs, albeit to a marginal extent, and this phenomenon is not universally applicable. At submission time, we selected the model trained on the augmented dataset constructed using the respective most effective method to generate predictions for the test set in each language pair, except for English-German. Despite not being highly competitive, our system consistently surpasses the baseline performance on most language pairs and secures a third-place ranking in English-Marathi.
[ "Wu, Yulong", "Schlegel, Viktor", "Beck, Daniel", "Batista-Navarro, Riza" ]
MMT's Submission for the WMT 2023 Quality Estimation Shared Task
wmt-1.75
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
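The wmt-1.75 record above finds PPDB-based synonym replacement the most effective augmentation for several language pairs. A minimal sketch of dictionary-based synonym replacement on the source side; the tiny `ppdb` mapping is a hypothetical stand-in for real PPDB paraphrase pairs:

```python
import random

# toy paraphrase dictionary standing in for PPDB entries (assumption)
ppdb = {"report": ["account", "summary"], "published": ["released", "issued"]}

def synonym_replace(sentence: str, p: float = 0.3, seed: int = 0) -> str:
    """Replace each token with a dictionary paraphrase with probability p."""
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        candidates = ppdb.get(tok.lower())
        out.append(rng.choice(candidates) if candidates and rng.random() < p else tok)
    return " ".join(out)

print(synonym_replace("The report was published yesterday ."))
```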
https://aclanthology.org/2023.wmt-1.76.bib
https://aclanthology.org/2023.wmt-1.76/
@inproceedings{yan-2023-iol, title = "{IOL} Research{'}s Submission for {WMT} 2023 Quality Estimation Shared Task", author = "Yan, Zeyu", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.76", doi = "10.18653/v1/2023.wmt-1.76", pages = "863--871", abstract = "This paper presents the submissions of IOL Research in WMT 2023 quality estimation shared task. We participate in task 1 Quality Estimation on both sentence and word levels, which predicts sentence quality score and word quality tags. Our system is a cross-lingual and multitask model for both sentence and word levels. We utilize several multilingual Pretrained Language Models (PLMs) as backbones and build task modules on them to achieve better predictions. A regression module on PLM is used to predict sentence level score and word tagging layer is used to classify the tag of each word in the translation based on the encoded representations from PLM. Each PLM is pretrained on quality estimation and metrics data from the previous WMT tasks before finetuning on training data this year. Furthermore, we integrate predictions from different models for better performance while the weights of each model are automatically searched and optimized by performance on Dev set. Our method achieves competitive results.", }
This paper presents the submissions of IOL Research to the WMT 2023 quality estimation shared task. We participate in Task 1, Quality Estimation, at both the sentence and word levels, which involves predicting a sentence quality score and word quality tags. Our system is a cross-lingual and multitask model for both sentence and word levels. We utilize several multilingual Pretrained Language Models (PLMs) as backbones and build task modules on them to achieve better predictions. A regression module on the PLM is used to predict the sentence-level score, and a word tagging layer is used to classify the tag of each word in the translation based on the encoded representations from the PLM. Each PLM is pretrained on quality estimation and metrics data from previous WMT tasks before finetuning on this year's training data. Furthermore, we integrate predictions from different models for better performance, with the weights of each model automatically searched and optimized by performance on the dev set. Our method achieves competitive results.
[ "Yan, Zeyu" ]
IOL Research's Submission for WMT 2023 Quality Estimation Shared Task
wmt-1.76
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
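The wmt-1.76 record above puts a sentence-level regression module and a word tagging layer on top of shared multilingual PLM encodings. A schematic PyTorch sketch, assuming `xlm-roberta-large` as one possible backbone; the head shapes and pooling are illustrative, not the submission's exact design:

```python
import torch.nn as nn
from transformers import AutoModel

class MultitaskQE(nn.Module):
    """Shared encoder with a sentence-score head and a per-token OK/BAD head."""

    def __init__(self, backbone: str = "xlm-roberta-large", n_tags: int = 2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size
        self.regressor = nn.Linear(hidden, 1)    # sentence-level quality score
        self.tagger = nn.Linear(hidden, n_tags)  # word-level quality tags

    def forward(self, input_ids, attention_mask):
        h = self.encoder(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        score = self.regressor(h[:, 0]).squeeze(-1)  # CLS-style pooling
        tag_logits = self.tagger(h)                  # [batch, seq_len, n_tags]
        return score, tag_logits
```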
https://aclanthology.org/2023.wmt-1.77.bib
https://aclanthology.org/2023.wmt-1.77/
@inproceedings{chen-wang-2023-sjtu, title = "{SJTU}-{MTLAB}{'}s Submission to the {WMT}23 Word-Level Auto Completion Task", author = "Chen, Xingyu and Wang, Rui", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.77", doi = "10.18653/v1/2023.wmt-1.77", pages = "872--876", abstract = "Word-level auto-completion (WLAC) plays a crucial role in Computer-Assisted Translation. In this paper, we describe the SJTU-MTLAB{'}s submission to the WMT23 WLAC task. We propose a joint method to incorporate the machine translation task to the WLAC task. The proposed approach is general and can be applied to various encoder-based architectures. Through extensive experiments, we demonstrate that our approach can greatly improve performance, while maintaining significantly small model sizes.", }
Word-level auto-completion (WLAC) plays a crucial role in Computer-Assisted Translation. In this paper, we describe SJTU-MTLAB{'}s submission to the WMT23 WLAC task. We propose a joint method to incorporate the machine translation task into the WLAC task. The proposed approach is general and can be applied to various encoder-based architectures. Through extensive experiments, we demonstrate that our approach can greatly improve performance, while maintaining significantly small model sizes.
[ "Chen, Xingyu", "Wang, Rui" ]
SJTU-MTLAB's Submission to the WMT23 Word-Level Auto Completion Task
wmt-1.77
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.78.bib
https://aclanthology.org/2023.wmt-1.78/
@inproceedings{navarro-etal-2023-prhlts, title = "{PRHLT}{'}s Submission to {WLAC} 2023", author = "Navarro, Angel and Domingo, Miguel and Casacuberta, Francisco", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.78", doi = "10.18653/v1/2023.wmt-1.78", pages = "877--881", abstract = "This paper describes our submission to the Word-Level AutoCompletion shared task of WMT23. We participated in the English{--}German and German{--}English categories. We extended our last year segment-based interactive machine translation approach to address its weakness when no context is available. Additionally, we fine-tune the pre-trained mT5 large language model to be used for autocompletion.", }
This paper describes our submission to the Word-Level AutoCompletion shared task of WMT23. We participated in the English{--}German and German{--}English categories. We extended our segment-based interactive machine translation approach from last year to address its weakness when no context is available. Additionally, we fine-tuned the pre-trained mT5 large language model for autocompletion.
[ "Navarro, Angel", "Domingo, Miguel", "Casacuberta, Francisco" ]
PRHLT's Submission to WLAC 2023
wmt-1.78
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
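The wmt-1.78 record above fine-tunes mT5 for autocompletion. A hypothetical text-to-text framing of one WLAC training example; the field layout (source, left/right context, typed prefix) is an assumption, not the submission's actual format:

```python
# hypothetical serialization of a WLAC example for seq2seq fine-tuning
def build_example(src: str, left: str, right: str, typed: str, target: str) -> dict:
    inp = (f"complete: source: {src} left: {left} "
           f"right: {right} prefix: {typed}")
    return {"input_text": inp, "target_text": target}

ex = build_example(src="Das Haus ist sehr groß .",
                   left="The house", right="very big .",
                   typed="i", target="is")
print(ex["input_text"], "->", ex["target_text"])
```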
https://aclanthology.org/2023.wmt-1.79.bib
https://aclanthology.org/2023.wmt-1.79/
@inproceedings{wu-etal-2023-knowcomp, title = "{K}now{C}omp Submission for {WMT}23 Word-Level {A}uto{C}ompletion Task", author = "Wu, Yi and Shi, Haochen and Wang, Weiqi and Song, Yangqiu", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.79", doi = "10.18653/v1/2023.wmt-1.79", pages = "882--889", abstract = "The NLP community has recently witnessed the success of Large Language Models (LLMs) across various Natural Language Processing (NLP) tasks. However, the potential of LLMs for word-level auto-completion in a multilingual context has not been thoroughly explored yet. To address this gap and benchmark the performance of LLMs, we propose an LLM-based system for the WMT23 Word-Level Auto-Completion (WLAC) task. Our system utilizes ChatGPT to represent LLMs and evaluates its performance in three translation directions: Chinese-English, German-English, and English-German. We also study the task under zero-shot and few-shot settings to assess the potential benefits of incorporating exemplars from the training set in guiding the LLM to perform the task. The results of our experiments show that, on average, our system attains a 29.8{\%} accuracy on the test set. Further analyses reveal that LLMs struggle with WLAC in the zero-shot setting, but performance significantly improves with the help of additional exemplars, though some common errors still appear frequently. These findings have important implications for incorporating LLMs into computer-aided translation systems, as they can potentially enhance the quality of translations. Our codes for evaluation are available at https://github.com/ethanyiwu/WLAC.", }
The NLP community has recently witnessed the success of Large Language Models (LLMs) across various Natural Language Processing (NLP) tasks. However, the potential of LLMs for word-level auto-completion in a multilingual context has not been thoroughly explored yet. To address this gap and benchmark the performance of LLMs, we propose an LLM-based system for the WMT23 Word-Level Auto-Completion (WLAC) task. Our system utilizes ChatGPT to represent LLMs and evaluates its performance in three translation directions: Chinese-English, German-English, and English-German. We also study the task under zero-shot and few-shot settings to assess the potential benefits of incorporating exemplars from the training set in guiding the LLM to perform the task. The results of our experiments show that, on average, our system attains 29.8{\%} accuracy on the test set. Further analyses reveal that LLMs struggle with WLAC in the zero-shot setting, but performance significantly improves with the help of additional exemplars, though some common errors still appear frequently. These findings have important implications for incorporating LLMs into computer-aided translation systems, as they can potentially enhance the quality of translations. Our code for evaluation is available at https://github.com/ethanyiwu/WLAC.
[ "Wu, Yi", "Shi, Haochen", "Wang, Weiqi", "Song, Yangqiu" ]
KnowComp Submission for WMT23 Word-Level AutoCompletion Task
wmt-1.79
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
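The wmt-1.79 record above contrasts zero-shot prompting with few-shot prompting using training-set exemplars. A sketch of a few-shot prompt builder; the exemplar fields and instruction wording are hypothetical:

```python
def wlac_prompt(src: str, left: str, right: str, typed: str, exemplars=()) -> str:
    """Build a few-shot prompt; pass exemplars=() for the zero-shot setting."""
    blocks = ["Complete the target word given the source sentence, the partial "
              "translation context, and the typed characters."]
    for ex in exemplars:  # exemplars drawn from the training set
        blocks.append(f"Source: {ex['src']}\nLeft: {ex['left']}\n"
                      f"Right: {ex['right']}\nTyped: {ex['typed']}\nWord: {ex['word']}")
    blocks.append(f"Source: {src}\nLeft: {left}\nRight: {right}\nTyped: {typed}\nWord:")
    return "\n\n".join(blocks)

print(wlac_prompt("他来自柏林。", "He comes", ".", "f",
                  exemplars=[{"src": "这本书很好。", "left": "This book",
                              "right": "good .", "typed": "i", "word": "is"}]))
```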
https://aclanthology.org/2023.wmt-1.80.bib
https://aclanthology.org/2023.wmt-1.80/
@inproceedings{bogoychev-chen-2023-terminology, title = "Terminology-Aware Translation with Constrained Decoding and Large Language Model Prompting", author = "Bogoychev, Nikolay and Chen, Pinzhen", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.80", doi = "10.18653/v1/2023.wmt-1.80", pages = "890--896", abstract = "Terminology correctness is important in the downstream application of machine translation, and a prevalent way to ensure this is to inject terminology constraints into a translation system. In our submission to the WMT 2023 terminology translation task, we adopt a translate-then-refine approach which can be domain-independent and requires minimal manual efforts. We annotate random source words with pseudo-terminology translations obtained from word alignment to first train a terminology-aware model. Further, we explore two post-processing methods. First, we use an alignment process to discover whether a terminology constraint has been violated, and if so, we re-decode with the violating word negatively constrained. Alternatively, we leverage a large language model to refine a hypothesis by providing it with terminology constraints. Results show that our terminology-aware model learns to incorporate terminologies effectively, and the large language model refinement process can further improve terminology recall.", }
Terminology correctness is important in the downstream application of machine translation, and a prevalent way to ensure this is to inject terminology constraints into a translation system. In our submission to the WMT 2023 terminology translation task, we adopt a translate-then-refine approach which can be domain-independent and requires minimal manual effort. We annotate random source words with pseudo-terminology translations obtained from word alignment to first train a terminology-aware model. Further, we explore two post-processing methods. First, we use an alignment process to discover whether a terminology constraint has been violated, and if so, we re-decode with the violating word negatively constrained. Alternatively, we leverage a large language model to refine a hypothesis by providing it with terminology constraints. Results show that our terminology-aware model learns to incorporate terminologies effectively, and the large language model refinement process can further improve terminology recall.
[ "Bogoychev, Nikolay", "Chen, Pinzhen" ]
Terminology-Aware Translation with Constrained Decoding and Large Language Model Prompting
wmt-1.80
2310.05824
[ "" ]
https://huggingface.co/papers/2310.05824
1
1
0
2
[]
[]
[]
1
Poster
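The wmt-1.80 record above re-decodes with the violating word negatively constrained whenever a terminology constraint is found to be violated. A simplified sketch of the violation check; the actual system uses word alignment, while plain substring matching here is only an illustrative shortcut:

```python
def violated_terms(hypothesis: str, term_pairs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return the (source term, required target term) pairs missing from the hypothesis."""
    hyp = hypothesis.lower()
    return [(src, tgt) for src, tgt in term_pairs if tgt.lower() not in hyp]

pairs = [("Schraubenschlüssel", "wrench"), ("Drehmoment", "torque")]
hyp = "Tighten the bolt with the spanner to 40 Nm of torque."
print(violated_terms(hyp, pairs))  # [('Schraubenschlüssel', 'wrench')] -> re-decode
```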
https://aclanthology.org/2023.wmt-1.81.bib
https://aclanthology.org/2023.wmt-1.81/
@inproceedings{liu-etal-2023-lingua, title = "Lingua Custodia{'}s Participation at the {WMT} 2023 Terminology Shared Task", author = {Liu, Jingshu and Nakhl{\'e}, Mariam and Caillout, Ga{\"e}tan and Qadar, Raheel}, editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.81", doi = "10.18653/v1/2023.wmt-1.81", pages = "897--901", abstract = "This paper presents Lingua Custodia{'}s submission to the WMT23 shared task on Terminology shared task. Ensuring precise translation of technical terms plays a pivotal role in gauging the final quality of machine translation results. Our goal is to follow the terminology constraint while applying the machine translation system. Inspired by the recent work of terminology control, we propose to annotate the machine learning training data by leveraging a synthetic dictionary extracted in a fully non supervised way from the give parallel corpora. The model learned with this training data can then be then used to translate text with a given terminology in a flexible manner. In addition, we introduce a careful annotated data re-sampling step in order to guide the model to see different terminology types enough times. In this task we consider all the three language directions: Chinese to English, English to Czech and German to English. Our automatic evaluation metrics with the submitted systems show the effectiveness of the proposed method.", }
This paper presents Lingua Custodia{'}s submission to the WMT23 Terminology shared task. Ensuring precise translation of technical terms plays a pivotal role in gauging the final quality of machine translation results. Our goal is to follow the terminology constraint while applying the machine translation system. Inspired by recent work on terminology control, we propose to annotate the machine learning training data by leveraging a synthetic dictionary extracted in a fully unsupervised way from the given parallel corpora. The model learned with this training data can then be used to translate text with a given terminology in a flexible manner. In addition, we introduce a careful annotated-data re-sampling step in order to guide the model to see different terminology types enough times. In this task we consider all three language directions: Chinese to English, English to Czech and German to English. Our automatic evaluation metrics with the submitted systems show the effectiveness of the proposed method.
[ "Liu, Jingshu", "Nakhl{\\'e}, Mariam", "Caillout, Ga{\\\"e}tan", "Qadar, Raheel" ]
Lingua Custodia's Participation at the WMT 2023 Terminology Shared Task
wmt-1.81
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
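The wmt-1.81 record above annotates training data with translations drawn from a synthetic dictionary. A sketch of inline term annotation; the `<term>`/`<trans>` tag scheme is hypothetical:

```python
def annotate(src_tokens: list[str], term_dict: dict[str, str]) -> str:
    """Tag dictionary hits on the source side with their target translations."""
    out = []
    for tok in src_tokens:
        tgt = term_dict.get(tok.lower())
        out.append(f"<term> {tok} <trans> {tgt} </term>" if tgt else tok)
    return " ".join(out)

print(annotate("Das Drehmoment beträgt 40 Nm .".split(), {"drehmoment": "torque"}))
# Das <term> Drehmoment <trans> torque </term> beträgt 40 Nm .
```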
https://aclanthology.org/2023.wmt-1.82.bib
https://aclanthology.org/2023.wmt-1.82/
@inproceedings{moslem-etal-2023-domain, title = "Domain Terminology Integration into Machine Translation: Leveraging Large Language Models", author = "Moslem, Yasmin and Romani, Gianfranco and Molaei, Mahdi and Kelleher, John D. and Haque, Rejwanul and Way, Andy", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.82", doi = "10.18653/v1/2023.wmt-1.82", pages = "902--911", abstract = "This paper discusses the methods that we used for our submissions to the WMT 2023 Terminology Shared Task for German-to-English (DE-EN), English-to-Czech (EN-CS), and Chinese-to-English (ZH-EN) language pairs. The task aims to advance machine translation (MT) by challenging participants to develop systems that accurately translate technical terms, ultimately enhancing communication and understanding in specialised domains. To this end, we conduct experiments that utilise large language models (LLMs) for two purposes: generating synthetic bilingual terminology-based data, and post-editing translations generated by an MT model through incorporating pre-approved terms. Our system employs a four-step process: (i) using an LLM to generate bilingual synthetic data based on the provided terminology, (ii) fine-tuning a generic encoder-decoder MT model, with a mix of the terminology-based synthetic data generated in the first step and a randomly sampled portion of the original generic training data, (iii) generating translations with the fine-tuned MT model, and (iv) finally, leveraging an LLM for terminology-constrained automatic post-editing of the translations that do not include the required terms. The results demonstrate the effectiveness of our proposed approach in improving the integration of pre-approved terms into translations. The number of terms incorporated into the translations of the blind dataset increases from an average of 36.67{\%} with the generic model to an average of 72.88{\%} by the end of the process. In other words, successful utilisation of terms nearly doubles across the three language pairs.", }
This paper discusses the methods that we used for our submissions to the WMT 2023 Terminology Shared Task for German-to-English (DE-EN), English-to-Czech (EN-CS), and Chinese-to-English (ZH-EN) language pairs. The task aims to advance machine translation (MT) by challenging participants to develop systems that accurately translate technical terms, ultimately enhancing communication and understanding in specialised domains. To this end, we conduct experiments that utilise large language models (LLMs) for two purposes: generating synthetic bilingual terminology-based data, and post-editing translations generated by an MT model through incorporating pre-approved terms. Our system employs a four-step process: (i) using an LLM to generate bilingual synthetic data based on the provided terminology, (ii) fine-tuning a generic encoder-decoder MT model, with a mix of the terminology-based synthetic data generated in the first step and a randomly sampled portion of the original generic training data, (iii) generating translations with the fine-tuned MT model, and (iv) finally, leveraging an LLM for terminology-constrained automatic post-editing of the translations that do not include the required terms. The results demonstrate the effectiveness of our proposed approach in improving the integration of pre-approved terms into translations. The number of terms incorporated into the translations of the blind dataset increases from an average of 36.67{\%} with the generic model to an average of 72.88{\%} by the end of the process. In other words, successful utilisation of terms nearly doubles across the three language pairs.
[ "Moslem, Yasmin", "Romani, Gianfranco", "Molaei, Mahdi", "Kelleher, John D.", "Haque, Rejwanul", "Way, Andy" ]
Domain Terminology Integration into Machine Translation: Leveraging Large Language Models
wmt-1.82
2310.14451
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
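Step (iv) of the wmt-1.82 pipeline above asks an LLM to post-edit translations that miss required terms. A sketch of such a prompt; the wording is illustrative, not the submission's actual prompt:

```python
def postedit_prompt(src: str, hyp: str, missing: list[tuple[str, str]]) -> str:
    """Ask an LLM to revise a hypothesis so all pre-approved terms appear."""
    terms = "; ".join(f"{s} -> {t}" for s, t in missing)
    return (f"Source: {src}\nTranslation: {hyp}\n"
            f"Revise the translation so that it uses these required terms: {terms}.\n"
            f"Revised translation:")

print(postedit_prompt("Das Drehmoment beträgt 40 Nm.",
                      "The turning force is 40 Nm.",
                      [("Drehmoment", "torque")]))
```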
https://aclanthology.org/2023.wmt-1.83.bib
https://aclanthology.org/2023.wmt-1.83/
@inproceedings{nieminen-2023-opus, title = "{OPUS}-{CAT} Terminology Systems for the {WMT}23 Terminology Shared Task", author = "Nieminen, Tommi", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.83", doi = "10.18653/v1/2023.wmt-1.83", pages = "912--918", abstract = "This paper describes the submission of the OPUS-CAT project to the WMT 2023 terminology shared task. We trained systems for all three language pairs included in the task. All systems were trained using the same training pipeline with identical methods. Support for terminology was implemented by using the currently popular method of annotating source language terms in the training data with the corresponding target language terms.", }
This paper describes the submission of the OPUS-CAT project to the WMT 2023 terminology shared task. We trained systems for all three language pairs included in the task. All systems were trained using the same training pipeline with identical methods. Support for terminology was implemented by using the currently popular method of annotating source language terms in the training data with the corresponding target language terms.
[ "Nieminen, Tommi" ]
OPUS-CAT Terminology Systems for the WMT23 Terminology Shared Task
wmt-1.83
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.84.bib
https://aclanthology.org/2023.wmt-1.84/
@inproceedings{park-etal-2023-varco, title = "{VARCO}-{MT}: {NCSOFT}{'}s {WMT}{'}23 Terminology Shared Task Submission", author = "Park, Geon Woo and Lee, Junghwa and Ren, Meiying and Shindell, Allison and Lee, Yeonsoo", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.84", doi = "10.18653/v1/2023.wmt-1.84", pages = "919--925", abstract = "A lack of consistency in terminology translation undermines quality of translation from even the best performing neural machine translation (NMT) models, especially in narrow domains like literature, medicine, and video game jargon. Dictionaries containing terminologies and their translations are often used to improve consistency but are difficult to construct and incorporate. We accompany our submissions to the WMT {`}23 Terminology Shared Task with a description of our experimental setup and procedure where we propose a framework of terminology-aware machine translation. Our framework comprises of an automatic terminology extraction process that constructs terminology-aware machine translation data in low-supervision settings and two model architectures with terminology constraints. Our models outperform baseline models by 21.51{\%}p and 19.36{\%}p in terminology recall respectively on the Chinese to English WMT{'}23 Terminology Shared Task test data.", }
A lack of consistency in terminology translation undermines the quality of translation from even the best-performing neural machine translation (NMT) models, especially in narrow domains like literature, medicine, and video game jargon. Dictionaries containing terminologies and their translations are often used to improve consistency but are difficult to construct and incorporate. We accompany our submissions to the WMT {`}23 Terminology Shared Task with a description of our experimental setup and procedure, where we propose a framework for terminology-aware machine translation. Our framework comprises an automatic terminology extraction process that constructs terminology-aware machine translation data in low-supervision settings and two model architectures with terminology constraints. Our models outperform baseline models by 21.51{\%}p and 19.36{\%}p in terminology recall, respectively, on the Chinese to English WMT{'}23 Terminology Shared Task test data.
[ "Park, Geon Woo", "Lee, Junghwa", "Ren, Meiying", "Shindell, Allison", "Lee, Yeonsoo" ]
VARCO-MT: NCSOFT's WMT'23 Terminology Shared Task Submission
wmt-1.84
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
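The wmt-1.84 record above reports gains in terminology recall. A sketch of that metric as commonly computed, namely the fraction of required target terms that occur in the system output (exact surface matching is a simplifying assumption):

```python
def terminology_recall(outputs: list[str], required: list[list[str]]) -> float:
    """Fraction of required target terms found in the corresponding outputs."""
    hits = total = 0
    for hyp, terms in zip(outputs, required):
        hyp_l = hyp.lower()
        for term in terms:
            total += 1
            hits += term.lower() in hyp_l
    return hits / total if total else 0.0

print(terminology_recall(["the torque is 40 nm ."], [["torque", "Nm"]]))  # 1.0
```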
https://aclanthology.org/2023.wmt-1.85.bib
https://aclanthology.org/2023.wmt-1.85/
@inproceedings{yu-etal-2023-hw, title = "{HW}-{TSC}{'}s Participation in the {WMT} 2023 Automatic Post Editing Shared Task", author = "Yu, Jiawei and Zhang, Min and Yanqing, Zhao and Zhao, Xiaofeng and Li, Yuang and Chang, Su and Li, Yinglu and Miaomiao, Ma and Tao, Shimin and Yang, Hao", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.85", doi = "10.18653/v1/2023.wmt-1.85", pages = "926--930", abstract = "The paper presents the submission by HW-TSC in the WMT 2023 Automatic Post Editing (APE) shared task for the English-Marathi (En-Mr) language pair. Our method encompasses several key steps. First, we pre-train an APE model by utilizing synthetic APE data provided by the official task organizers. Then, we fine-tune the model by employing real APE data. For data augmentation, we incorporate candidate translations obtained from an external Machine Translation (MT) system. Furthermore, we integrate the En-Mr parallel corpus from the Flores-200 dataset into our training data. To address the overfitting issue, we employ R-Drop during the training phase. Given that APE systems tend to exhibit a tendency of {`}over-correction{'}, we employ a sentence-level Quality Estimation (QE) system to select the final output, deciding between the original translation and the corresponding output generated by the APE model. Our experiments demonstrate that pre-trained APE models are effective when being fine-tuned with the APE corpus of a limited size, and the performance can be further improved with external MT augmentation. Our approach improves the TER and BLEU scores on the development set by -2.42 and +3.76 points, respectively.", }
The paper presents the submission by HW-TSC to the WMT 2023 Automatic Post Editing (APE) shared task for the English-Marathi (En-Mr) language pair. Our method encompasses several key steps. First, we pre-train an APE model by utilizing synthetic APE data provided by the official task organizers. Then, we fine-tune the model by employing real APE data. For data augmentation, we incorporate candidate translations obtained from an external Machine Translation (MT) system. Furthermore, we integrate the En-Mr parallel corpus from the Flores-200 dataset into our training data. To address the overfitting issue, we employ R-Drop during the training phase. Given that APE systems exhibit a tendency toward {`}over-correction{'}, we employ a sentence-level Quality Estimation (QE) system to select the final output, deciding between the original translation and the corresponding output generated by the APE model. Our experiments demonstrate that pre-trained APE models are effective when fine-tuned with an APE corpus of limited size, and that performance can be further improved with external MT augmentation. Our approach improves the TER and BLEU scores on the development set by -2.42 and +3.76 points, respectively.
[ "Yu, Jiawei", "Zhang, Min", "Yanqing, Zhao", "Zhao, Xiaofeng", "Li, Yuang", "Chang, Su", "Li, Yinglu", "Miaomiao, Ma", "Tao, Shimin", "Yang, Hao" ]
HW-TSC's Participation in the WMT 2023 Automatic Post Editing Shared Task
wmt-1.85
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
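The wmt-1.85 record above gates between the original translation and the APE output with a sentence-level QE system. A minimal sketch of that selection step; `qe_score` is a stand-in for a real QE model such as CometKiwi:

```python
def select_output(src: str, mt: str, ape: str, qe_score) -> str:
    """Keep the APE hypothesis only when QE judges it better than the MT output."""
    return ape if qe_score(src, ape) > qe_score(src, mt) else mt

# toy stand-in QE that simply prefers longer hypotheses (illustration only)
print(select_output("src", "short guess", "a longer candidate",
                    qe_score=lambda s, h: len(h)))
```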
https://aclanthology.org/2023.wmt-1.86.bib
https://aclanthology.org/2023.wmt-1.86/
@inproceedings{agrawal-etal-2023-neural, title = "Neural Machine Translation for {E}nglish - {M}anipuri and {E}nglish - {A}ssamese", author = "Agrawal, Goutam and Das, Rituraj and Biswas, Anupam and Thounaojam, Dalton Meitei", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.86", doi = "10.18653/v1/2023.wmt-1.86", pages = "931--934", abstract = "The internet is a vast repository of valuable information available in English, but for many people who are more comfortable with their regional languages, accessing this knowledge can be a challenge. Manually translating this kind of text, is a laborious, expensive, and time-consuming operation. This makes machine translation an effective method for translating texts without the need for human intervention. One of the newest and most efficient translation methods among the current machine translation systems is neural machine translation (NMT). In this WMT23 shared task: low resource indic language translation challenge, our team named ATULYA-NITS used the NMT transformer model for the English to/from Assamese and English to/from Manipuri language translation. Our systems achieved the BLEU score of 15.02 for English to Manipuri, 18.7 for Manipuri to English, 5.47 for English to Assamese, and 8.5 for Assamese to English.", }
The internet is a vast repository of valuable information available in English, but for many people who are more comfortable with their regional languages, accessing this knowledge can be a challenge. Manually translating this kind of text is a laborious, expensive, and time-consuming operation. This makes machine translation an effective method for translating texts without the need for human intervention. One of the newest and most efficient translation methods among current machine translation systems is neural machine translation (NMT). In this WMT23 shared task: low resource Indic language translation challenge, our team, named ATULYA-NITS, used the NMT transformer model for English to/from Assamese and English to/from Manipuri translation. Our systems achieved BLEU scores of 15.02 for English to Manipuri, 18.7 for Manipuri to English, 5.47 for English to Assamese, and 8.5 for Assamese to English.
[ "Agrawal, Goutam", "Das, Rituraj", "Biswas, Anupam", "Thounaojam, Dalton Meitei" ]
Neural Machine Translation for English - Manipuri and English - Assamese
wmt-1.86
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.87.bib
https://aclanthology.org/2023.wmt-1.87/
@inproceedings{ahmed-etal-2023-guit, title = "{GUIT}-{NLP}{'}s Submission to Shared Task: Low Resource {I}ndic Language Translation", author = "Ahmed, Mazida and Talukdar, Kuwali and Boruah, Parvez and Sarma, Prof. Shikhar Kumar and Kashyap, Kishore", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.87", doi = "10.18653/v1/2023.wmt-1.87", pages = "935--940", abstract = "This paper describes the submission of the GUIT-NLP team in the {``}Shared Task: Low Resource Indic Language Translation{''} focusing on three low-resource language pairs: English-Mizo, English-Khasi, and English-Assamese. The initial phase involves an in-depth exploration of Neural Machine Translation (NMT) techniques tailored to the available data. Within this investigation, various Subword Tokenization approaches, model configurations (exploring differnt hyper-parameters etc.) of the general NMT pipeline are tested to identify the most effective method. Subsequently, we address the challenge of low-resource languages by leveraging monolingual data through an innovative and systematic application of the Back Translation technique for English-Mizo. During model training, the monolingual data is progressively integrated into the original bilingual dataset, with each iteration yielding higher-quality back translations. This iterative approach significantly enhances the model{'}s performance, resulting in a notable increase of +3.65 in BLEU scores. Further improvements of +5.59 are achieved through fine-tuning using authentic parallel data.", }
This paper describes the submission of the GUIT-NLP team to the {``}Shared Task: Low Resource Indic Language Translation{''}, focusing on three low-resource language pairs: English-Mizo, English-Khasi, and English-Assamese. The initial phase involves an in-depth exploration of Neural Machine Translation (NMT) techniques tailored to the available data. Within this investigation, various subword tokenization approaches and model configurations (exploring different hyper-parameters, etc.) of the general NMT pipeline are tested to identify the most effective method. Subsequently, we address the challenge of low-resource languages by leveraging monolingual data through an innovative and systematic application of the Back Translation technique for English-Mizo. During model training, the monolingual data is progressively integrated into the original bilingual dataset, with each iteration yielding higher-quality back translations. This iterative approach significantly enhances the model{'}s performance, resulting in a notable increase of +3.65 in BLEU scores. Further improvements of +5.59 are achieved through fine-tuning using authentic parallel data.
[ "Ahmed, Mazida", "Talukdar, Kuwali", "Boruah, Parvez", "Sarma, Prof. Shikhar Kumar", "Kashyap, Kishore" ]
GUIT-NLP's Submission to Shared Task: Low Resource Indic Language Translation
wmt-1.87
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
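The wmt-1.87 record above progressively mixes back-translated monolingual data into training across iterations and then fine-tunes on authentic parallel data. A schematic sketch of that loop; `train`, `translate`, and `finetune` are stand-ins for a real NMT pipeline:

```python
def iterative_back_translation(bitext, mono_tgt, rounds, train, translate, finetune):
    """bitext: list of (src, tgt) pairs; mono_tgt: target-language sentences."""
    rev = train([(t, s) for s, t in bitext])  # reverse (tgt -> src) model
    fwd = train(bitext)                       # forward (src -> tgt) model
    for _ in range(rounds):
        # back-translate monolingual target text into synthetic source sides
        synthetic = [(translate(rev, t), t) for t in mono_tgt]
        fwd = train(bitext + synthetic)       # retrain on authentic + synthetic
    return finetune(fwd, bitext)              # final pass on authentic data only
```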
https://aclanthology.org/2023.wmt-1.88.bib
https://aclanthology.org/2023.wmt-1.88/
@inproceedings{dabre-etal-2023-nict, title = "{NICT}-{AI}4{B}{'}s Submission to the {I}ndic {MT} Shared Task in {WMT} 2023", author = "Dabre, Raj and Gala, Jay and Chitale, Pranjal A.", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.88", doi = "10.18653/v1/2023.wmt-1.88", pages = "941--949", abstract = "In this paper, we (Team NICT-AI4B) describe our MT systems that we submit to the Indic MT task in WMT 2023. Our primary system consists of 3 stages: Joint denoising and MT training using officially approved monolingual and parallel corpora, backtranslation and, MT training on original and backtranslated parallel corpora. We observe that backtranslation leads to substantial improvements in translation quality up to 4 BLEU points. We also develop 2 contrastive systems on unconstrained settings, where the first system involves fine-tuning of IndicTrans2 DA models on official parallel corpora and seed data used in AI4Bharat et al, (2023), and the second system involves a system combination of the primary and the aforementioned system. Overall, we manage to obtain high-quality translation systems for the 4 low-resource North-East Indian languages of focus.", }
In this paper, we (Team NICT-AI4B) describe the MT systems that we submitted to the Indic MT task in WMT 2023. Our primary system consists of 3 stages: joint denoising and MT training using officially approved monolingual and parallel corpora, backtranslation, and MT training on original and backtranslated parallel corpora. We observe that backtranslation leads to substantial improvements in translation quality of up to 4 BLEU points. We also develop 2 contrastive systems in unconstrained settings, where the first system involves fine-tuning of IndicTrans2 DA models on official parallel corpora and the seed data used in AI4Bharat et al. (2023), and the second system involves a system combination of the primary system and the aforementioned system. Overall, we manage to obtain high-quality translation systems for the 4 low-resource North-East Indian languages of focus.
[ "Dabre, Raj", "Gala, Jay", "Chitale, Pranjal A." ]
NICT-AI4B's Submission to the Indic MT Shared Task in WMT 2023
wmt-1.88
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.89.bib
https://aclanthology.org/2023.wmt-1.89/
@inproceedings{gaikwad-etal-2023-machine, title = "Machine Translation Advancements for Low-Resource {I}ndian Languages in {WMT}23: {CFILT}-{IITB}{'}s Effort for Bridging the Gap", author = "Gaikwad, Pranav and Doshi, Meet and Deoghare, Sourabh and Bhattacharyya, Pushpak", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.89", doi = "10.18653/v1/2023.wmt-1.89", pages = "950--953", abstract = "This paper is related to the submission of the CFILT-IITB team for the task called IndicMT in WMT23. The paper describes our MT systems submitted to the WMT23 IndicMT shared task. The task focused on MT system development from/to English and four low-resource North-East Indian languages, viz., Assamese, Khasi, Manipuri, and Mizo. We trained them on a small parallel corpus resulting in poor-quality systems. Therefore, we utilize transfer learning with the help of a large pre-trained multilingual NMT system. Since this approach produced the best results, we submitted our NMT models for the shared task using this approach.", }
This paper describes the CFILT-IITB team's MT systems submitted to the WMT23 IndicMT shared task. The task focused on MT system development from/to English and four low-resource North-East Indian languages, viz., Assamese, Khasi, Manipuri, and Mizo. Training on the small parallel corpora alone resulted in poor-quality systems. Therefore, we utilized transfer learning with the help of a large pre-trained multilingual NMT system. Since this approach produced the best results, we submitted our NMT models for the shared task using this approach.
[ "Gaikwad, Pranav", "Doshi, Meet", "Deoghare, Sourabh", "Bhattacharyya, Pushpak" ]
Machine Translation Advancements for Low-Resource Indian Languages in WMT23: CFILT-IITB's Effort for Bridging the Gap
wmt-1.89
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.90.bib
https://aclanthology.org/2023.wmt-1.90/
@inproceedings{kvapilikova-bojar-2023-low, title = "Low-Resource Machine Translation Systems for {I}ndic Languages", author = "Kvapil{\'\i}kov{\'a}, Ivana and Bojar, Ond{\v{r}}ej", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.90", doi = "10.18653/v1/2023.wmt-1.90", pages = "954--958", abstract = "We present our submission to the WMT23 shared task in translation between English and Assamese, Khasi, Mizo and Manipuri. All our systems were pretrained on the task of multilingual masked language modelling and denoising auto-encoding. Our primary systems for translation into English were further pretrained for multilingual MT in all four language directions and fine-tuned on the limited parallel data available for each language pair separately. We used online back-translation for data augmentation. The same systems were submitted as contrastive for translation out of English as the multilingual MT pretraining step seemed to harm the translation performance. Our primary systems for translation out of English were trained without the multilingual MT pretraining step. Other contrastive systems used additional pseudo-parallel data mined from monolingual corpora for pretraining.", }
We present our submission to the WMT23 shared task in translation between English and Assamese, Khasi, Mizo and Manipuri. All our systems were pretrained on the task of multilingual masked language modelling and denoising auto-encoding. Our primary systems for translation into English were further pretrained for multilingual MT in all four language directions and fine-tuned on the limited parallel data available for each language pair separately. We used online back-translation for data augmentation. The same systems were submitted as contrastive for translation out of English as the multilingual MT pretraining step seemed to harm the translation performance. Our primary systems for translation out of English were trained without the multilingual MT pretraining step. Other contrastive systems used additional pseudo-parallel data mined from monolingual corpora for pretraining.
[ "Kvapil{\\'\\i}kov{\\'a}, Ivana", "Bojar, Ond{\\v{r}}ej" ]
Low-Resource Machine Translation Systems for Indic Languages
wmt-1.90
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
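The wmt-1.90 record above pretrains with multilingual masked language modelling and denoising auto-encoding. A sketch of a typical denoising noise function (token masking plus local shuffling); the noise parameters are illustrative, not the submission's exact settings:

```python
import random

def add_noise(tokens, mask_prob=0.15, max_shuffle_dist=3, seed=0):
    """Mask some tokens and locally permute positions, mBART-style."""
    rng = random.Random(seed)
    noised = ["<mask>" if rng.random() < mask_prob else t for t in tokens]
    # add a bounded random offset to each position, then sort by it
    keys = [i + rng.uniform(0, max_shuffle_dist) for i in range(len(noised))]
    return [t for _, t in sorted(zip(keys, noised))]

print(add_noise("we present our submission to the shared task".split()))
```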
https://aclanthology.org/2023.wmt-1.91.bib
https://aclanthology.org/2023.wmt-1.91/
@inproceedings{signoroni-rychly-2023-muni, title = "{MUNI}-{NLP} Systems for Low-resource {I}ndic Machine Translation", author = "Signoroni, Edoardo and Rychly, Pavel", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.91", doi = "10.18653/v1/2023.wmt-1.91", pages = "959--966", abstract = "The WMT 2023 Shared Task on Low-Resource Indic Language Translation featured to and from Assamese, Khasi, Manipuri, Mizo on one side and English on the other. We submitted systems supervised neural machine translation systems for each pair and direction and experimented with different configurations and settings for both preprocessing and training. Even if most of them did not reach competitive performance, our experiments uncovered some interesting points for further investigation, namely the relation between dataset and model size, and the impact of the training framework. Moreover, the results of some of our preliminary experiments on the use of word embeddings initialization, backtranslation, and model depth were in contrast with previous work. The final results also show some disagreement in the automated metrics employed in the evaluation.", }
The WMT 2023 Shared Task on Low-Resource Indic Language Translation featured translation to and from Assamese, Khasi, Manipuri, and Mizo on one side and English on the other. We submitted supervised neural machine translation systems for each pair and direction and experimented with different configurations and settings for both preprocessing and training. Even if most of them did not reach competitive performance, our experiments uncovered some interesting points for further investigation, namely the relation between dataset and model size, and the impact of the training framework. Moreover, the results of some of our preliminary experiments on the use of word embedding initialization, backtranslation, and model depth were in contrast with previous work. The final results also show some disagreement among the automated metrics employed in the evaluation.
[ "Signoroni, Edoardo", "Rychly, Pavel" ]
MUNI-NLP Systems for Low-resource Indic Machine Translation
wmt-1.91
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.wmt-1.92.bib
https://aclanthology.org/2023.wmt-1.92/
@inproceedings{singh-etal-2023-nits, title = "{NITS}-{CNLP} Low-Resource Neural Machine Translation Systems of {E}nglish-{M}anipuri Language Pair", author = "Singh, Kshetrimayum Boynao and Ningthoujam, Avichandra Singh and Sanayai Meetei, Loitongbam and Bandyopadhyay, Sivaji and Singh, Thoudam Doren", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.92", doi = "10.18653/v1/2023.wmt-1.92", pages = "967--971", abstract = "This paper describes the transformer-based Neural Machine translation (NMT) system for the Low-Resource Indic Language Translation task for the English-Manipuri language pair submitted by the Centre for Natural Language Processing in National Institute of Technology Silchar, India (NITS-CNLP) in the WMT 2023 shared task. The model attained an overall BLEU score of 22.75 and 26.92 for the English to Manipuri and Manipuri to English translations respectively. Experimental results for English to Manipuri and Manipuri to English models for character level n-gram F-score (chrF) of 48.35 and 48.64, RIBES of 0.61 and 0.65, TER of 70.02 and 67.62, as well as COMET of 0.70 and 0.66 respectively are reported.", }
This paper describes the transformer-based Neural Machine Translation (NMT) system for the Low-Resource Indic Language Translation task for the English-Manipuri language pair, submitted by the Centre for Natural Language Processing at the National Institute of Technology Silchar, India (NITS-CNLP) in the WMT 2023 shared task. The model attained overall BLEU scores of 22.75 and 26.92 for the English to Manipuri and Manipuri to English translations respectively. Experimental results for the English to Manipuri and Manipuri to English models show character-level n-gram F-scores (chrF) of 48.35 and 48.64, RIBES of 0.61 and 0.65, TER of 70.02 and 67.62, as well as COMET of 0.70 and 0.66, respectively.
[ "Singh, Kshetrimayum Boynao", "Ningthoujam, Avich", "ra Singh", "Sanayai Meetei, Loitongbam", "B", "yopadhyay, Sivaji", "Singh, Thoudam Doren" ]
NITS-CNLP Low-Resource Neural Machine Translation Systems of English-Manipuri Language Pair
wmt-1.92
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
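The wmt-1.92 record above reports BLEU, chrF, TER, RIBES, and COMET. A sketch of computing the sacrebleu-backed subset of these metrics, assuming the sacrebleu >=2.0 API (RIBES and COMET require separate packages):

```python
import sacrebleu

hyps = ["the house is very big ."]
refs = [["the house is really big ."]]  # one reference stream

print(sacrebleu.corpus_bleu(hyps, refs).score)  # BLEU
print(sacrebleu.corpus_chrf(hyps, refs).score)  # chrF
print(sacrebleu.corpus_ter(hyps, refs).score)   # TER
```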
https://aclanthology.org/2023.wmt-1.93.bib
https://aclanthology.org/2023.wmt-1.93/
@inproceedings{suman-etal-2023-iacs, title = "{IACS}-{LRILT}: Machine Translation for Low-Resource {I}ndic Languages", author = "Suman, Dhairya and Mandal, Atanu and Pal, Santanu and Naskar, Sudip", editor = "Koehn, Philipp and Haddow, Barry and Kocmi, Tom and Monz, Christof", booktitle = "Proceedings of the Eighth Conference on Machine Translation", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.wmt-1.93", doi = "10.18653/v1/2023.wmt-1.93", pages = "972--977", abstract = "Even though, machine translation has seen huge improvements in the the last decade, translation quality for Indic languages is still underwhelming, which is attributed to the small amount of parallel data available. In this paper, we present our approach to mitigate the issue of the low amount of parallel training data availability for Indic languages, especially for the language pair English-Manipuri and Assamese-English. Our primary submission for the Manipuri-to-English translation task provided the best scoring system for this language direction. We describe about the systems we built in detail and our findings in the process.", }
Even though machine translation has seen huge improvements in the last decade, translation quality for Indic languages is still underwhelming, which is attributed to the small amount of parallel data available. In this paper, we present our approach to mitigating the limited availability of parallel training data for Indic languages, especially for the English-Manipuri and Assamese-English language pairs. Our primary submission for the Manipuri-to-English translation task provided the best-scoring system for this language direction. We describe the systems we built in detail and our findings in the process.
[ "Suman, Dhairya", "M", "al, Atanu", "Pal, Santanu", "Naskar, Sudip" ]
IACS-LRILT: Machine Translation for Low-Resource Indic Languages
wmt-1.93
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster