|
{ |
|
"paper_id": "2004", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:17:17.452958Z" |
|
}, |
|
"title": "The ISL EDTRL System", |
|
"authors": [ |
|
{ |
|
"first": "Juergen", |
|
"middle": [], |
|
"last": "Reichert", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University and Karlsruhe University", |
|
"location": {} |
|
}, |
|
"email": "juergen@ira.uka.de" |
|
}, |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Waibel", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Carnegie Mellon University and Karlsruhe University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "For the translation of text and speech, statistical methods on one side and interlingua-based methods on the other have been used successfully. However, the latter require programming grammars for each language, plus the design of an interlingua, while the former require the collection of a large parallel corpus for every language pair. To alleviate these problems, we propose an approach that combines the advantages of both worlds. The proposed approach makes use of English or enriched English as an interlingua and can cascade data-driven translation systems into and out of this interlingua. We show that enriching English with linguistic information that is automatically derived only from English data performs better than pure cascaded systems.",
|
"pdf_parse": { |
|
"paper_id": "2004", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "For the translation of text and speech, statistical methods on one side and interlingua-based methods on the other have been used successfully. However, the latter require programming grammars for each language, plus the design of an interlingua, while the former require the collection of a large parallel corpus for every language pair. To alleviate these problems, we propose an approach that combines the advantages of both worlds. The proposed approach makes use of English or enriched English as an interlingua and can cascade data-driven translation systems into and out of this interlingua. We show that enriching English with linguistic information that is automatically derived only from English data performs better than pure cascaded systems.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "In recent years, a number of translation approaches have been proposed to provide reliable, meaningful translation of text and of speech from one language to another. These include direct approaches (statistical, example-based), transfer and interlingua based approaches. Translation performance is usually the first (if not the most important) consideration in the evaluation of these systems, but has to be balanced against robustness and portability as additional important dimensions of the translation problem. The translation of speech, in particular, is faced with both of these additional challenges: the input speech and its recognition is fragmentary, ill-formed and errorful, and speech translation systems are frequently required to handle multiple language pairs and language directions to allow for successful cross-lingual dialogs between humans.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "To accommodate these additional constraints, a popular approach has been the interlingua approach to translation. Here, an intermediate representation of meaning is chosen to express the key idea or intent of the speaker. An input sentence is parsed in terms of its key semantic content, represented in an interlingua structure, and from there an equivalent sentence is generated in another language. The use of an interlingua has several advantages. First, adding a new language to an existing system is simplified, since the new language only has to be translated into and out of the interlingua, and no separate translators are required for every other language pair the system supports. Second, the translation step extracts only the key intentions from a speaker's utterance, thereby handling colloquial expressions and reducing the sensitivity to redundancies and disfluencies in spoken language. Third, the system can generate paraphrases from the interlingua back into one's own language to provide meaningful feedback and verification of the translation, before it is delivered into another language.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "The advantages, however, come at a price: The extraction of the key content from a disfluent input sentence requires the development of semantic grammars that extract key information into frames, concepts and slots. Both the design of a suitable, unambiguous and language-independent interlingua and the development of grammars that map sentences to meaning are domain-dependent and have to be repeated for each topic or domain. Their development is labor intensive and requires both linguistic expertise and command of the language at hand. As an attempt to solve these problems, automatic learning has been proposed to alleviate the manual development work. The most popular approaches at present are statistical and example-based methods. Both extract direct mappings from input to output language using large parallel corpora between these languages. Statistical machine translation permits automatic statistical learning to build a translator rather than manual programming, but a system has to be developed for each language pair. Each translator, in turn, requires a large parallel corpus for training. While parallel corpora are generally available for large common languages, it is rare to find large parallel corpora for more unusual language pairs (say, Pashto-Catalan) and domains.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this paper, we therefore develop an alternative strategy: the use of general English and a linguistically enriched English as interlingua. Here we avoid the manual design of an interlingua, and the writing of grammars for analysis and generation; but we also avoid the need for large parallel corpora for every language pair. Moreover, English as interlingua can be 'enriched' by linguistic information extracted in a data-driven fashion automatically and monolingually in English, where plenty of data exists.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1." |
|
}, |
|
{ |
|
"text": "In this section we describe the Error Driven Translation Rule Learning (EDTRL) translation system. The EDTRL system uses enriched English as an interlingua to translate from a source language into a target language, going through this special interlingua as an intermediate step. This approach tries to combine the advantages of a system with an explicit interlingua and the advantages of a pure data-driven system. Thereby a new language can be added to a given system with n languages very quickly, by adding only 2 components instead of n-1. The use of enriched English as an interlingua eliminates the need for an explicit, handcrafted interlingua specification and removes the domain limitation which is typical for interlingua-based translation systems.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Description of the EDTRL System", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "An additional benefit of this combination is the reduction of the 'Parallel Data Sparseness Problem'. For most non-English language pairs the amount of parallel text corpora is much smaller than the parallel text corpora from each of these languages paired with English. Using English as an interlingua can therefore increase the amount of available training data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Description of the EDTRL System", |
|
"sec_num": "2." |
|
}, |
|
{ |
|
"text": "The EDTRL system is based on statistical transfer rules which are automatically learned from bilingual corpora. While the system can learn transfer rules from two non-English languages and then acts like a direct data-driven translation system, it is designed to use augmented, formalized English as interlingua. Thus one language of the parallel corpus has to be English, and it has to be standardized and annotated with additional linguistic information. The annotation and standardization process only depends on the English part of the parallel corpus and is consequently independent of the source and target language of the system. Annotations made on the English side are projected through the word and phrase alignment models onto the source and target language. Some mapping errors are introduced by transferring the structural knowledge from English to some other language, but these can often be compensated by the higher quality and quantity of structured knowledge available for English compared to most other languages.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Design Ideas of EDTRL", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "To allow the use of a translation system in real-world applications, a small footprint in memory and space as well as a fast translation process are important. To achieve these goals we decided, instead of keeping the whole statistical translation model, to generate statistical rules from the model. Even these generated rules are too many to build a small and fast system; therefore we keep only a subset of all rules generated during training and use an evaluation test set to determine the most significant and important rules. This allows us to find the best compromise between size and performance for each application domain. The use of probabilistic translation rules makes it easy to add new rules and even exceptions to existing rules. It also allows tracking translation errors and correcting them if necessary. This ability also leads to an interactive learning mode, in which the user can teach the system and optimize its behavior.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Basic Design Ideas of EDTRL", |
|
"sec_num": "2.1." |
|
}, |
|
{ |
|
"text": "The standardization step tries to map alternative expressions with similar or equal meanings to the most commonly used alternative. Furthermore, the sentence structure is simplified [SE]. E.g. more complex, rarely used tenses are replaced by simpler ones: 'He had spoken.' \u2192 'He spoke.'; 'He would be speaking.'",

"cite_spans": [

{

"start": 182,

"end": 186,

"text": "[SE]",

"ref_id": null

}

],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Standardized and Simplified English", |
|
"sec_num": "2.1.1." |
|
}, |
|
{ |
|
"text": "\u2192 'He would speak.' These kinds of simplifications of course remove information, but such fine nuances often contribute little to the quality of the translation given the current state of MT systems. In most cases the translation profits from the transformations through more reliable alignments and better utilization of the training data.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Standardized and Simplified English", |
|
"sec_num": "2.1.1." |
|
}, |
|
{ |
|
"text": "Even humans can benefit from Simplified English in some technical domains [AECMA]. Sometimes English utterances have some freedom in word order that does not change the main meaning of the utterance. To obtain a consistent word order, some simple rules are applied, e.g. 'please give me \u2026' \u2192 'give me \u2026 please'.",
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 81, |
|
"text": "[AECMA]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Standardized and Simplified English", |
|
"sec_num": "2.1.1." |
|
}, |
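The standardization step described above can be sketched as a small set of pattern rules; the rule tables below are illustrative stand-ins, not the system's actual ones:

```python
import re

# Hypothetical standardization rules in the spirit of the paper's examples:
# complex tenses are simplified and word order is normalized.
TENSE_RULES = [
    (re.compile(r"\bhad (\w+ed)\b"), r"\1"),              # "had talked" -> "talked" (regular verbs only)
    (re.compile(r"\bwould be (\w+)ing\b"), r"would \1"),  # "would be speaking" -> "would speak"
]

def standardize(sentence: str) -> str:
    """Map alternative expressions to a canonical, simplified form."""
    for pattern, repl in TENSE_RULES:
        sentence = pattern.sub(repl, sentence)
    # Word-order normalization: "please give me ..." -> "give me ... please"
    m = re.match(r"please (.+?)\.?$", sentence)
    if m:
        sentence = m.group(1) + " please"
    return sentence
```

A real system would apply many such rules; the point is only that both the tense simplification and the word-order normalization are cheap, monolingual string operations on the English side.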
|
{ |
|
"text": "The translation errors from the intermediate English to the target language can be reduced if not only the best hypothesis but also additional information from the search is used. We examined the following methods: n-best list of complete translations: the translation system produces up to n alternative translation hypotheses and passes them to the second translation step. The number of hypotheses has to be kept small to guarantee fast overall decoding, thereby allowing only little variability.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preserve translation alternatives", |
|
"sec_num": "2.1.2." |
|
}, |
|
{ |
|
"text": "n-best word or phrase alternatives to the best hypothesis: This method selects the single best hypothesis from the first translation step, but augments it by adding alternative words or phrases, which have high translation probabilities.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preserve translation alternatives", |
|
"sec_num": "2.1.2." |
|
}, |
|
{ |
|
"text": "Full lattice: In order not to fix one translation hypothesis as the basis for constructing these alternatives, we can also pass on full translation lattices. Using a lattice as input for the second translation step has proven to be the most profitable way to use translation alternatives to improve translation quality.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preserve translation alternatives", |
|
"sec_num": "2.1.2." |
|
}, |
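The relationship between the n-best and lattice variants can be illustrated by expanding a small phrase lattice into a pruned n-best list; the data layout and names are assumptions for illustration, not the system's interface:

```python
# A toy lattice: one slot per source position, each slot holding alternative
# target phrases with their translation probabilities.
Lattice = list[list[tuple[str, float]]]

def nbest_from_lattice(lattice: Lattice, n: int = 3) -> list[tuple[str, float]]:
    """Expand a (small) lattice into its n best complete hypotheses."""
    hyps = [("", 1.0)]
    for slot in lattice:
        # extend every partial hypothesis by every alternative in this slot
        hyps = [(f"{text} {w}".strip(), p * q) for text, p in hyps for w, q in slot]
        hyps = sorted(hyps, key=lambda h: -h[1])[:n]  # prune to keep decoding fast
    return hyps
```

With `[[("I", 1.0)], [("caught", 0.6), ("got", 0.4)], [("a cold", 0.7), ("cold", 0.3)]]`, the 2-best list keeps "I caught a cold" and "I got a cold"; passing the full lattice instead avoids committing to this pruning before the second translation step.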
|
{ |
|
"text": "Besides translation alternatives, further information on the structure and the semantic content of a sentence can be helpful. Therefore we incorporated the following additional knowledge sources into our system to provide information for the translation process: Morphological Analyzer: Starting from the WordNet ontology [WordNet] we built a system to analyse an English word form and determine its base form and derivation rule. The analyzer contains a set of common transformation rules and an even larger list of exceptions from these rules. In the current implementation, each word is analysed without using its context or information from former sentences. The precision for finding the base class is over 95% while the determination of the derivation rules is not yet that good.", |
|
"cite_spans": [ |
|
{ |
|
"start": 322, |
|
"end": 331, |
|
"text": "[WordNet]", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional knowledge sources", |
|
"sec_num": "2.1.3." |
|
}, |
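A toy version of such an analyzer, with a hypothetical rule and exception table (the real system derives far larger tables starting from WordNet):

```python
# Exceptions are checked first, then common suffix-transformation rules.
# Both tables here are small illustrative examples, not the system's data.
EXCEPTIONS = {"spoke": ("speak", "past"), "children": ("child", "plural")}
SUFFIX_RULES = [("ies", "y", "plural"), ("ing", "", "gerund"), ("ed", "", "past"), ("s", "", "plural")]

def analyze(word: str) -> tuple[str, str]:
    """Return (base form, derivation rule) for an English word form,
    without using sentence context."""
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    for suffix, repl, rule in SUFFIX_RULES:
        if word.endswith(suffix) and len(word) > len(suffix) + 1:
            return word[: -len(suffix)] + repl, rule
    return word, "base"
```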
|
{ |
|
"text": "Sense Guesser: The sense guesser tries to find the sense of a word. Many words have different meanings depending on the context in which they occur. E.g. table can have the senses 'desk' or 'chart'. Often the context of the word can be used for disambiguation. In our example, the context 'in the' assigns table to the chart-class, while 'on the' assigns it to the desk-class. We used the sense hierarchy from WordNet.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional knowledge sources", |
|
"sec_num": "2.1.3." |
|
}, |
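A minimal sketch of this kind of context-based disambiguation, using the paper's 'table' example; the context table is a hypothetical illustration:

```python
# Map a word plus its immediate left context to a WordNet-style sense class.
SENSE_CONTEXTS = {
    "table": {"in the": "chart", "on the": "desk"},
}

def guess_sense(word: str, left_context: str, default: str = "unknown") -> str:
    """Pick a sense for `word` from its left context (last two words)."""
    bigram = " ".join(left_context.lower().split()[-2:])
    return SENSE_CONTEXTS.get(word, {}).get(bigram, default)
```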
|
{ |
|
"text": "Synonym Generator: WordNet also lists synonyms for words, all within its well-structured and linked hierarchy. Both the Sense Guesser and the Synonym Generator operate only on open word classes, i.e. nouns, verbs, adjectives, and adverbs.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional knowledge sources", |
|
"sec_num": "2.1.3." |
|
}, |
|
{ |
|
"text": "Part-of-Speech Tagger: a statistical Part-of-Speech tagger was used to provide POS-tags. The tagger uses the tag set which is described in [Brill 1995] and was trained on the tagged Brown Corpus.", |
|
"cite_spans": [ |
|
{ |
|
"start": 139, |
|
"end": 151, |
|
"text": "[Brill 1995]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional knowledge sources", |
|
"sec_num": "2.1.3." |
|
}, |
|
{ |
|
"text": "Named Entity Tagger: a prototype based on handwritten rules allows us to find named entities, which often have to be treated in a special way.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional knowledge sources", |
|
"sec_num": "2.1.3." |
|
}, |
|
{ |
|
"text": "Further knowledge sources like sentence type, active or passive voice, politeness, domain or category could also be added. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Additional knowledge sources", |
|
"sec_num": "2.1.3." |
|
}, |
|
{ |
|
"text": "In addition to the information from the different knowledge sources, a probability or confidence measure for each knowledge source and alternative is added to the interlingua. Words and phrases therefore carry attributes with probabilities and their possible alternatives. All this combined additional information forms the interlingua (intermediate representation) of the EDTRL system. For translating into English, the interlingua can easily be transformed into plain English by stripping off all additional information and using the most likely alternative. For translating from English into some other language, the additional information can be added directly, i.e. plain English is transformed into the annotated form, which is then used as the interlingua.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Probabilities and Confidence Measures", |
|
"sec_num": "2.1.4." |
|
}, |
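One way to picture the annotated interlingua is as English tokens carrying attributes and scored alternatives; the field names below are assumptions, not the system's actual representation:

```python
from dataclasses import dataclass, field

@dataclass
class ILToken:
    """An interlingua token: an English word plus annotations."""
    surface: str
    attributes: dict = field(default_factory=dict)    # e.g. POS tag, sense class
    alternatives: list = field(default_factory=list)  # (form, confidence) pairs

def to_plain_english(tokens: list) -> str:
    """Strip all annotations, keeping the most likely alternative per token."""
    words = [max(t.alternatives, key=lambda a: a[1])[0] if t.alternatives else t.surface
             for t in tokens]
    return " ".join(words)
```

The inverse direction (plain English to interlingua) runs the knowledge sources of section 2.1.3 to fill in `attributes` and `alternatives`.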
|
{ |
|
"text": "In the ideal case the learning process only needs parallel texts and optional dictionaries to and from English, because all other knowledge sources operate on English and are independent of the input and output languages. However, directly parallel texts or dictionaries from the source to the target language can be incorporated into the system if available.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Training and Translating", |
|
"sec_num": "2.2." |
|
}, |
|
{ |
|
"text": "In a first step a word alignment (IBM1 or modified IBM2) is performed. In a second step a phrase alignment based on the word alignment is executed, which simultaneously joins similar regions of the word alignment matrix and splits the matrix into smaller parts. For these splitting and joining operations, normalized probabilities from the word alignment and the language models are used. The phrase alignment generates a collection of partitions of the word alignment matrix together with their probabilities.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Statistical Alignment", |
|
"sec_num": "2.2.1." |
|
}, |
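The first step corresponds to standard IBM Model 1 training; a compact EM sketch (not the system's implementation, and without the modified IBM2 position model):

```python
from collections import defaultdict

def train_ibm1(bitext, iters=10):
    """Estimate word translation probabilities t(f|e) by EM over a
    list of (source_words, target_words) sentence pairs."""
    f_vocab = {f for _, fs in bitext for f in fs}
    t = defaultdict(lambda: 1.0 / len(f_vocab))  # uniform initialization
    for _ in range(iters):
        count = defaultdict(float)
        total = defaultdict(float)
        for es, fs in bitext:
            for f in fs:
                norm = sum(t[(f, e)] for e in es)  # E-step: expected alignments
                for e in es:
                    c = t[(f, e)] / norm
                    count[(f, e)] += c
                    total[e] += c
        for (f, e), c in count.items():            # M-step: renormalize
            t[(f, e)] = c / total[e]
    return t
```

On the classic two-pair example (the house / das haus, the book / das buch), the co-occurrence of "the" and "das" in both pairs drives t(das|the) toward 1.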
|
{ |
|
"text": "To enhance the quality of the statistical alignment, weight functions are introduced which change the weights of a sentence alignment according to a heuristic concept. Different weight functions were examined.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight Functions for the Alignment", |
|
"sec_num": "2.2.2." |
|
}, |
|
{ |
|
"text": "A) The Weight Position Factor takes into account that in parallel sentences the source word positions are not independent of the corresponding translation word positions. Often they lie next to the diagonal of the alignment matrix. The following formula can give them a higher weight.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight Functions for the Alignment", |
|
"sec_num": "2.2.2." |
|
}, |
|
{ |
|
"text": "k \u2022 |WordAPos / #WordsA \u2212 WordBPos / #WordsB| B)",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight Functions for the Alignment", |
|
"sec_num": "2.2.2." |
|
}, |
|
{ |
|
"text": "The Length Penalty reflects the assumption that longer utterances often result in less accurate alignments; they are therefore penalized using the expression k \u2022 log(len). C) Parallel utterances of significantly different length often produce alignments of minor quality. Therefore the Matching Length Factor prefers utterances of almost the same length.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight Functions for the Alignment", |
|
"sec_num": "2.2.2." |
|
}, |
|
{ |
|
"text": "D) The Frequency Weight reflects the observation that alignments between words with similar frequency counts are typically more accurate than alignments between words with very different frequency counts.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight Functions for the Alignment", |
|
"sec_num": "2.2.2." |
|
}, |
|
{ |
|
"text": "k \u2022 (#WordsA + #WordsB) / (2 \u2022 max(#WordsA, #WordsB))",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight Functions for the Alignment", |
|
"sec_num": "2.2.2." |
|
}, |
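Written out as code, the four weight functions might look as follows. The exact functional forms are reconstructions from the partly garbled formulas, and the way each factor enters the alignment score (here as a simple multiplicative weight) is an assumption:

```python
import math

def position_factor(pos_a, len_a, pos_b, len_b, k=2.0):
    # A) favor word pairs that lie near the diagonal of the alignment matrix
    return math.exp(-k * abs(pos_a / len_a - pos_b / len_b))

def length_penalty(length, k=0.1):
    # B) penalize long utterances, whose alignments tend to be less accurate
    return k * math.log(length)

def matching_length_factor(len_a, len_b):
    # C) prefer sentence pairs of almost the same length
    return min(len_a, len_b) / max(len_a, len_b)

def frequency_weight(freq_a, freq_b, k=1.0):
    # D) prefer word pairs with similar corpus frequencies
    return k * (freq_a + freq_b) / (2 * max(freq_a, freq_b))
```

Each `k` is the tunable parameter estimated on the validation set.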
|
{ |
|
"text": "Each function is parameterized and its parameters are estimated on a validation set. The Weight Position Factor gives by far the best improvement in alignment quality, compared to a manual alignment: it reduces the alignment error by 13.1%, while the other weight functions give improvements of 1.5% to 3%. A combination of all four weight functions reduces the error by 14.6%.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight Functions for the Alignment", |
|
"sec_num": "2.2.2." |
|
}, |
|
{ |
|
"text": "Besides these four weight functions, many other functions are conceivable and could be examined.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Weight Functions for the Alignment", |
|
"sec_num": "2.2.2." |
|
}, |
|
{ |
|
"text": "On the basis of the phrase alignment, optional dictionaries, and the semantic and morphological knowledge, translation rules are generated. Optimal rules should be accurate (not introducing errors in other translation contexts) and should not be too specific, so that they can be applied frequently. Rules are of the form:",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule Generation and Selection", |
|
"sec_num": "2.2.3." |
|
}, |
|
{ |
|
"text": "Cond1 | Cond2 | \u2026 \u2192 Templ1 | Templ2 | \u2026",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule Generation and Selection", |
|
"sec_num": "2.2.3." |
|
}, |
|
{ |
|
"text": "where Cond can be a word or phrase containing attribute classes and Templ is a template which has to be instantiated during the translation process. Both Cond and Templ carry probabilities. Most attribute classes are part of a hierarchy. This allows enforcing a match by walking up the tree to a more general representation while at the same time decreasing the rule score. A set of meta-rules controls the construction process. Every time a translation rule contradicts the training data, the rule is split and new attributes are added to resolve the error.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule Generation and Selection", |
|
"sec_num": "2.2.3." |
|
}, |
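The hierarchy walk-up can be sketched as follows; the hierarchy, the rule table, the Spanish template, and the decay factor are all illustrative assumptions:

```python
# child class -> parent class, and (word, class) -> (template, probability)
HIERARCHY = {"Disease": "Condition", "Condition": "Concept"}
RULES = {("cold", "Condition"): ("resfriado", 0.8)}

def match_rule(word: str, attr: str, decay: float = 0.5):
    """Find a rule for (word, attr), backing off up the attribute hierarchy;
    each generalization step lowers the rule score."""
    score = 1.0
    while True:
        if (word, attr) in RULES:
            templ, p = RULES[(word, attr)]
            return templ, p * score
        if attr not in HIERARCHY:
            return None          # no rule found, even at the most general class
        attr = HIERARCHY[attr]   # generalize the attribute class ...
        score *= decay           # ... at the cost of a lower rule score
```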
|
{ |
|
"text": "In order not to get too many rules, each rule is checked for its efficiency on a validation set.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Rule Generation and Selection", |
|
"sec_num": "2.2.3." |
|
}, |
|
{ |
|
"text": "The translation process tries to match and instantiate rules along the input utterance. This results in a search tree which needs to be pruned if it grows too large. A beam search then determines the best hypothesis, weighted by a trigram language model. Both directions, to and from the interlingua, are very similar, as shown in the following simple example. In each direction, explicit language knowledge is only used for the English part.",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Translation Process", |
|
"sec_num": "2.2.4." |
|
}, |
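A minimal sketch of the decoding loop: rules propose scored target phrases along the input, and a beam keeps the best hypotheses under a trigram language model. `rules` and `lm` are assumed interfaces for illustration, not the system's actual ones:

```python
import math

def beam_decode(source, rules, lm, beam_size=4):
    """Translate `source` (a word list) with a rule table mapping each word
    to scored target phrases, rescoring hypotheses with a trigram LM
    (`lm` returns a log-probability for a context tuple)."""
    hyps = [([], 0.0)]  # (target words so far, log score)
    for word in source:
        new_hyps = []
        for words, score in hyps:
            for target, p in rules.get(word, [(word, 1.0)]):  # pass unknowns through
                ext = words + target.split()
                new_hyps.append((ext, score + math.log(p) + lm(tuple(ext[-3:]))))
        hyps = sorted(new_hyps, key=lambda h: -h[1])[:beam_size]  # prune to beam
    best = max(hyps, key=lambda h: h[1])
    return " ".join(best[0])
```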
|
{ |
|
"text": "[Example translation lattice; the source-language characters were lost in PDF extraction. The hypotheses include 'catch <VB>' (0.7), 'catch <NN>', 'cold <Temperature attribute>' (0.4), and 'cold <Disease>' (1.0), each mapping to scored target alternatives.]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Translation Process", |
|
"sec_num": "2.2.4." |
|
}, |
|
{ |
|
"text": "Instantiation of the first rule: =>", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Translation Process", |
|
"sec_num": "2.2.4." |
|
}, |
|
{ |
|
"text": "[Resulting target-language phrase; the characters were lost in PDF extraction.]",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Translation Process", |
|
"sec_num": "2.2.4." |
|
}, |
|
{ |
|
"text": "To evaluate the concept of English as an interlingua we chose Chinese as input language and Spanish as output language, since, in spite of the widespread use of these languages, comparatively few direct Chinese-Spanish translations are available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "We trained a number of translation systems that translate directly from Chinese to English, English to Spanish, and Chinese to Spanish, respectively, and compared the results of the direct Chinese-to-Spanish system with two combined approaches that use English as an intermediate language: First, we simply cascaded the Chinese-English and English-Spanish systems, feeding the output of the former into the latter. We then translated the same test set using the full EDTRL system's definition of an augmented, formalized version of English as an interlingua. For further comparison, the direct and cascaded translation steps were also done with Systran's publicly available online machine translation system [Systran2004].",

"cite_spans": [

{

"start": 708,

"end": 721,

"text": "[Systran2004]",

"ref_id": null

}

],
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The data for these experiments were taken from the Basic Travel Expression Corpus (BTEC), a multilingual collection of conversational phrases in the travel domain [Takezawa2002] . The Chinese-English system was trained on 162316 parallel phrases. As only a subset of 6027 phrases was available in Spanish, only the corresponding parallel phrases were used to train the English-Spanish and Chinese-Spanish systems. The test set consisted of 506 new sentences created for the 2003 CSTAR evaluation campaign, and the scores were calculated using 16 English and on average 3 to 4 Spanish reference translations. We report the NIST score using the mteval script [MTeval2002]. The EDTRL interlingua system (C\u2192EIL\u2192S) reaches a NIST score of 3.69 (Table 1).",

"cite_spans": [

{

"start": 163,

"end": 177,

"text": "[Takezawa2002]",

"ref_id": null

},

{

"start": 657,

"end": 669,

"text": "[MTeval2002]",

"ref_id": null

}

],

"ref_spans": [

{

"start": 740,

"end": 747,

"text": "Table 1",

"ref_id": null

}

],
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The higher scores of the statistical systems on Chinese to English, compared to translations to Spanish, mainly reflect the facts that a much larger amount of training material was used and that the evaluation was performed with a higher number of references. Surprisingly, the cascaded EDTRL systems resulted in better performance than a directly trained system. This effect is caused by the fact that the EDTRL system uses dictionaries for Chinese-English and English-Spanish, while for Chinese-Spanish no dictionary is available.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "Using augmented and formalized English (EIL) as an interlingua in the EDTRL system is shown to yield improvements over the pure cascaded translation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "The results of a slightly improved system for the Chinese-English unrestricted track of the IWSLT 2004 evaluation are given below. The subjective scores are the average of the medians of the three grades assigned to each translation. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experiments", |
|
"sec_num": "3." |
|
}, |
|
{ |
|
"text": "This work was partially supported by the European Union under the integrated project TC-STAR, IST-2002-2.3.1.6.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": "4." |
|
}, |
|
{ |
|
"text": "Rule: cold <Disease> 0.3 | rheum <Body Substance> 0.2 | to catch cold <VB,Change> 0.4. Instantiation of the first rule: => I think I've caught a cold from someone",
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "A Case Study in Part of Speech Tagging",
|
"authors": [ |
|
{ |
|
"first": "Eric", |
|
"middle": [], |
|
"last": "Brill", |
|
"suffix": "" |
|
}
|
], |
|
"year": 1995, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eric Brill, A Case Study in Part of Speech Tagging, 1995 Association for Computational Linguistics [MTeval2002] NIST MT evaluation kit version 11. http://www.nist.gov/speech/tests/mt/.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world", |
|
"authors": [ |
|
{ |
|
"first": "Toshiyuki", |
|
"middle": [], |
|
"last": "Takezawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Eiichiro", |
|
"middle": [], |
|
"last": "Sumita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fumiaki", |
|
"middle": [], |
|
"last": "Sugaya", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hirofumi", |
|
"middle": [], |
|
"last": "Yamamoto", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Seiichi", |
|
"middle": [], |
|
"last": "Yamamoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Proc. of LREC 2002", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "147--152", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Toshiyuki Takezawa, Eiichiro Sumita, Fumiaki Sugaya, Hirofumi Yamamoto, and Seiichi Yamamoto. Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world. Proc. of LREC 2002, pp. 147-152, Las Palmas, Canary Islands, Spain, May 2002.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF2": { |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"3\">Systems</td><td>EDTRL</td><td>Systran</td></tr><tr><td>C</td><td>E</td><td/><td>7.34</td><td>5.74</td></tr><tr><td>E</td><td>S</td><td/><td>5.17</td><td>6.06</td></tr><tr><td>C</td><td>S</td><td/><td>3.17</td><td>-</td></tr><tr><td>C</td><td>E</td><td>S</td><td>3.41</td><td>2.84</td></tr><tr><td>C</td><td>EIL</td><td>S</td><td>3.69</td><td>-</td></tr></table>",
|
"text": "Table 1: Results (NIST-Score).",
|
"html": null, |
|
"type_str": "table" |
|
}, |
|
"TABREF4": { |
|
"num": null, |
|
"content": "<table/>", |
|
"text": "", |
|
"html": null, |
|
"type_str": "table" |
|
} |
|
} |
|
} |
|
} |