{
    "paper_id": "2004",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:22:00.635980Z"
    },
    "title": "TALP: Xgram-based Spoken Language Translation System",
    "authors": [
        {
            "first": "Adri\u00e0",
            "middle": [],
            "last": "De Gispert",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Jos\u00e9",
            "middle": [
                "B"
            ],
            "last": "Mari\u00f1o",
            "suffix": "",
            "affiliation": {},
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "This paper introduces TALP, a speech-to-speech statistical machine translation system developed at the TALP Research Center (Barcelona, Spain). TALP generates translations by searching for the best-scoring path through a Finite-State Transducer (FST), which models an Xgram of the bilingual language defined by tuples. A detailed description of the system and of the core processes to train it from a parallel corpus is presented. Results on the Chinese-English supplied task of the Int. Workshop on Spoken Language Translation (IWSLT'04) Evaluation Campaign are shown and discussed.",
    "pdf_parse": {
        "paper_id": "2004",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "This paper introduces TALP, a speech-to-speech statistical machine translation system developed at the TALP Research Center (Barcelona, Spain). TALP generates translations by searching for the best-scoring path through a Finite-State Transducer (FST), which models an Xgram of the bilingual language defined by tuples. A detailed description of the system and of the core processes to train it from a parallel corpus is presented. Results on the Chinese-English supplied task of the Int. Workshop on Spoken Language Translation (IWSLT'04) Evaluation Campaign are shown and discussed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "TALP (Traducci\u00f3 Autom\u00e0tica del Llenguatge Parlat) is a speech-to-speech statistical machine translation system developed at the TALP Research Center (Barcelona, Spain) over the last few years. It implements an integrated architecture by joining speech recognition and translation in a single step. Mathematically, the system produces a translation by maximizing the joint probability between source and target languages, which is equivalent to a language model of a special language with bilingual units (called tuples). TALP implements this tuple language model by means of a Finite-State Transducer (FST) with an Xgram memory, that is, a variable-length N-gram model which adapts its length to the evidence in the data. Xgrams have yielded good results in speech recognition tasks in the past [1].",
                "cite_spans": [
                    {
                        "start": 797,
                        "end": 800,
                        "text": "[1]",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of the system",
                "sec_num": "1."
            },
            {
                "text": "Given such a bilingual FST, the search for a translation becomes the search for the best-scoring path among the transducer's edges. This search can be performed by dynamic programming, using well-known decoding techniques from the speech recognition domain. This way, the Viterbi algorithm and a beam search can be applied forwards taking only source-language words into account (the first part of each tuple), reading words in the target language during trace-back to produce the translation (Figure 1 shows a translation FST from Spanish to English). Using the same structure and search method, acoustic models can be omitted to perform text-only translation tasks.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 493,
                        "end": 501,
                        "text": "Figure 1",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Overview of the system",
                "sec_num": "1."
            },
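The monotone decoding described above (dynamic programming forwards over source words, target words read off during trace-back) can be sketched as follows. This is an illustrative toy, not the TALP implementation: the tuple table, its probabilities and the unigram-only tuple scoring are all invented for the example, whereas the real system scores paths with an Xgram model over tuple histories.

```python
import math

# Toy tuple table: source phrase -> [(target phrase, log-probability)].
# All entries and probabilities are invented for this illustration.
TUPLES = {
    ("quisiera",): [(("i", "would", "like"), math.log(0.7))],
    ("reservar",): [(("to", "book"), math.log(0.6))],
    ("una", "habitacion"): [(("a", "room"), math.log(0.8))],
    ("una",): [(("a",), math.log(0.5))],
    ("habitacion",): [(("room",), math.log(0.5))],
}

def decode(source, max_src_len=3):
    """Monotone dynamic-programming search: best[i] is the best-scoring
    segmentation of source[:i] into tuples; the translation is read off
    the target sides during trace-back."""
    n = len(source)
    best = [None] * (n + 1)   # best[i] = (score, backpointer, target phrase)
    best[0] = (0.0, -1, ())
    for i in range(1, n + 1):
        for j in range(max(0, i - max_src_len), i):
            if best[j] is None:
                continue
            for tgt, lp in TUPLES.get(tuple(source[j:i]), []):
                score = best[j][0] + lp
                if best[i] is None or score > best[i][0]:
                    best[i] = (score, j, tgt)
    if best[n] is None:
        return None           # some source word never appears in any tuple
    out, i = [], n
    while i > 0:              # trace back, emitting target words
        _, j, tgt = best[i]
        out[:0] = list(tgt)
        i = j
    return out

hyp = decode(["quisiera", "reservar", "una", "habitacion"])
```

Note how the two-word tuple ("una", "habitacion") competes with the two one-word tuples covering the same span; the path with the higher accumulated log-probability wins.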
            {
                "text": "This translation FST is learned automatically from a parallel corpus in three main steps (plus an optional preprocessing). First, an automatic word alignment is produced. Currently this is done by the freely-available GIZA++ software [2], implementing the well-known IBM and HMM translation models [3, 4]. From this alignment, a tuple extraction algorithm generates the set of tuples that induces a sequential segmentation of both source and target sentences. These tuples must respect word order in both languages, as this is necessary for the transducer to produce a correctly ordered translated output. Finally, Xgrams are learned using standard language modeling techniques. Previous publications on this system include [5] and [6].",
                "cite_spans": [
                    {
                        "start": 233,
                        "end": 236,
                        "text": "[2]",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 294,
                        "end": 297,
                        "text": "[3,",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 298,
                        "end": 300,
                        "text": "4]",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 714,
                        "end": 717,
                        "text": "[5]",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 722,
                        "end": 725,
                        "text": "[6]",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of the system",
                "sec_num": "1."
            },
            {
                "text": "The organization of the paper is as follows. Sections 2 and 3 go into detail on translation generation and on training, respectively. Section 4 presents the experimental framework used to evaluate the system, whose results are discussed in section 5. Finally, section 6 concludes and outlines future research lines.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Overview of the system",
                "sec_num": "1."
            },
            {
                "text": "Statistical machine translation is based on the assumption that every sentence e in the target language is a possible translation of a given sentence f in the source language. The main difference between two possible translations of a given sentence is the probability assigned to each, which is to be learned from a bilingual text corpus. This probability can be modeled by a joint probability model of source and target languages. In this case, solving the translation problem amounts to finding the sentence in the target language that maximises equation 1: e = arg max_e p(e, f) (1). This probability can be approximated by an Xgram of a joint or bilingual language model, learned from a set of tuples, as expressed in equation 2: p(e, f) \u2248 prod_n p((e, f)_n | (e, f)_{n-1}, ..., (e, f)_{n-X+1}) (2), where:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation generation",
                "sec_num": "2."
            },
            {
                "text": "(e, f)_n = (e_{i_n} ... e_{i_n+I_n}, f_{j_n} ... f_{j_n+J_n})",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation generation",
                "sec_num": "2."
            },
            {
                "text": "TALP implements this Xgram language model by means of a finite-state transducer whose edges are labelled with tuples (as shown in figure 1). That is, each edge has a label that relates one or more words in the source language to zero, one or more words in the target language. This way, some edges may have just one word in the source language whereas others may have more, and both are valid as long as the first source word matches the input. Bearing this in mind, all well-known ASR decoding techniques can be used to find the best-scoring translation of a given sentence, once the transducer is built.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation generation",
                "sec_num": "2."
            },
            {
                "text": "In a speech-to-speech translation framework, the input data is the speech signal, so the objective of translation becomes the search for the sentence e in the target language that maximises the following equation:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation generation",
                "sec_num": "2."
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "e = arg max e {p(e, f ) \u2022 p(x|f )}",
                        "eq_num": "(3)"
                    }
                ],
                "section": "Translation generation",
                "sec_num": "2."
            },
            {
                "text": "where we introduce the acoustic model p(x|f) in the optimisation (x being the input acoustic signal). Therefore, by following this transducer-based approach, the same training and search techniques can be used to tackle both text and speech translation tasks. The following section goes into the details of how the FST is learned from a parallel textual corpus. The current architecture of the TALP translator performs the search for the best translation in a monotonic fashion. Any reordering of the target words is restricted to the short region defined by the tuple. That is, reordering can only be produced inside a tuple, which can contain crossed alignment relationships. This poses a strong limitation on the system, especially when dealing with pairs of languages with long reorderings in word alignment, such as Chinese and English. Several reordering techniques have been tested with the FST architecture, none of them providing significant improvements (for the Spanish-English case, see [6]).",
                "cite_spans": [
                    {
                        "start": 983,
                        "end": 986,
                        "text": "[6]",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation generation",
                "sec_num": "2."
            },
            {
                "text": "Usual language modeling techniques can be used to learn the tuple language model (Xgram) once a given parallel corpus has been transformed into a set of tuples for each sentence. To do so, the training of the system comprises three basic stages (plus an optional preprocessing), which are shown in the flow diagram of figure 2. These steps are described in the following subsections.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Training",
                "sec_num": "3."
            },
            {
                "text": "The preprocessing stage is aimed at categorising words in order to reduce the output vocabulary, helping the alignment stage to increase accuracy without reducing input flexibility. Some basic word groups can be categorised, namely personal names and names of cities, towns or countries (manually), and dates, times of the day and numbers (automatically). With the Xgram software, these categories or word groups can easily be modelled by smaller finite-state transducers that translate each of their possible alternative values.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preprocessing",
                "sec_num": "3.1."
            },
            {
                "text": "However, this preprocessing is an optional and language-dependent stage, depending on the availability of resources. In the frame of a Chinese-English translation task, only minimal preprocessing has been performed. As evaluation is performed without punctuation marks, we experimented with training without punctuation, but this was discarded as results were equal to or worse than leaving punctuation in until a final output post-processing step.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preprocessing",
                "sec_num": "3.1."
            },
            {
                "text": "On the other hand, a special segmentation of the training corpus was performed. Whenever a pair of Chinese-English sentences shared the same number and type of punctuation marks (considering '.', ',' and '?'), they were split according to the position of the punctuation. This gives the training corpus more, and shorter, sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Preprocessing",
                "sec_num": "3.1."
            },
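The punctuation-driven segmentation just described can be sketched as below: a sentence pair is split only when both sides carry the same sequence of '.', ',' and '?' marks. This is a simplified token-level illustration with hypothetical helper names, not the actual TALP preprocessing code.

```python
PUNCT = {".", ",", "?"}

def split_on_punct(tokens):
    """Split a token list after each '.', ',' or '?', keeping the mark
    with the preceding segment; also return the sequence of marks seen."""
    segs, marks, cur = [], [], []
    for tok in tokens:
        cur.append(tok)
        if tok in PUNCT:
            segs.append(cur)
            marks.append(tok)
            cur = []
    if cur:
        segs.append(cur)
        marks.append(None)  # trailing segment without a final mark
    return segs, marks

def segment_pair(src_tokens, tgt_tokens):
    """Split a sentence pair only when both sides share the same number
    and type of punctuation marks; otherwise keep the pair whole."""
    s_segs, s_marks = split_on_punct(src_tokens)
    t_segs, t_marks = split_on_punct(tgt_tokens)
    if s_marks == t_marks:
        return list(zip(s_segs, t_segs))
    return [(src_tokens, tgt_tokens)]
```

A pair whose punctuation sequences differ is left intact, so the segmentation never produces misaligned fragments.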
            {
                "text": "Assuming that the input parallel text is sentence-aligned, we perform a standard statistical word alignment stage using GIZA++, a freely-available software package which implements the so-called IBM alignment models presented in [3] as well as the HMM-based alignment model [4], producing the Viterbi alignment as an approximation to the most probable one. Due to the asymmetric nature of the resulting alignment (linking one word in the source language to one or more words in the target language), several symmetrization strategies can be used (such as the union or the intersection of the alignments in both directions).",
                "cite_spans": [
                    {
                        "start": 224,
                        "end": 227,
                        "text": "[3]",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 269,
                        "end": 272,
                        "text": "[4]",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word alignment",
                "sec_num": "3.2."
            },
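The two symmetrization strategies mentioned above reduce to simple set operations once each directed alignment is expressed as a set of (source index, target index) links. A minimal sketch; the toy links below are invented, not taken from the corpus.

```python
def symmetrize(s2t_links, t2s_links):
    """Given the source-to-target and target-to-source Viterbi alignments
    as sets of (src, tgt) index pairs (the t2s links already flipped into
    the same coordinates), return the union and intersection alignments."""
    return s2t_links | t2s_links, s2t_links & t2s_links

# Toy example: the two directions agree on two links and disagree on two.
s2t = {(0, 0), (1, 1), (1, 2)}
t2s = {(0, 0), (1, 1), (2, 2)}
union, inter = symmetrize(s2t, t2s)
```

The union keeps every link from either direction (denser, longer tuples later on), while the intersection keeps only mutually confirmed links (sparser, mostly one-to-one).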
            {
                "text": "In our case, both the union and the intersection are computed, and either can also be used to generate the set of tuples, as can the source-to-target (s2t) and target-to-source (t2s) alignments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Word alignment",
                "sec_num": "3.2."
            },
            {
                "text": "Once the alignment is produced, the tuple extraction unit has to build units in such a way that word order in both languages is not violated, a necessary requirement when dealing with finite-state translation transducers, as already discussed in [7], because otherwise the transducer would learn order-incorrect sentences. Given a sentence pair and a corresponding word alignment, the sequential set of tuples contains those pairs of m source words and n target words satisfying these constraints: 4. Each tuple cannot be decomposed into smaller phrases without violating the previous constraint.",
                "cite_spans": [
                    {
                        "start": 249,
                        "end": 252,
                        "text": "[7]",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tuples extraction",
                "sec_num": "3.3."
            },
            {
                "text": "Note that this set is unique under these conditions [8]. The only ambiguity appears when a target word is aligned to NULL, in which case we append it to the next tuple (if one exists, otherwise to the previous one). An example of the tuple extraction process is drawn in figure 3.",
                "cite_spans": [
                    {
                        "start": 52,
                        "end": 55,
                        "text": "[8]",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 259,
                        "end": 267,
                        "text": "figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Tuples extraction",
                "sec_num": "3.3."
            },
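The extraction of this unique minimal monotone tuple sequence can be sketched as follows: grow each tuple until no alignment link crosses its boundary, and attach unaligned target words to the next tuple (or to the last one at the end of the sentence). This is a simplified sketch assuming the alignment admits such a monotone segmentation; it is not the paper's actual code.

```python
def extract_tuples(src, tgt, links):
    """Segment an aligned sentence pair into minimal monotone tuples.
    `links` is a set of (src_idx, tgt_idx) alignment pairs."""
    tuples, i0, j0 = [], 0, 0
    while i0 < len(src):
        i1, j1 = i0 + 1, min(j0 + 1, len(tgt))
        grown = True
        while grown:  # grow until no link crosses the tuple boundary
            grown = False
            for i, j in links:
                in_src = i0 <= i < i1
                in_tgt = j0 <= j < j1
                if in_src and not in_tgt and j >= j1:
                    j1 = j + 1
                    grown = True
                if in_tgt and not in_src and i >= i1:
                    i1 = i + 1
                    grown = True
        tuples.append((tuple(src[i0:i1]), tuple(tgt[j0:j1])))
        i0, j0 = i1, j1
    if j0 < len(tgt) and tuples:  # leftover NULL-aligned target words
        s_last, t_last = tuples[-1]
        tuples[-1] = (s_last, t_last + tuple(tgt[j0:]))
    return tuples
```

A crossed alignment forces the two spans into one multi-word tuple, which is exactly how local reordering ends up encoded inside tuples.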
            {
                "text": "When extracting tuples with more than one word in each language (as in the third tuple of figure 3, which shows tuple extraction from an aligned sentence pair), a certain local reordering of the target is necessarily encoded. While helping the system to avoid local reordering mistakes, this strategy can suffer from an information loss, as the source words appearing in such a tuple may not have any translation if they do not appear elsewhere alone in a tuple. We call these words embedded, as their translation appears only embedded in a longer phrase.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 87,
                        "end": 95,
                        "text": "figure 3",
                        "ref_id": null
                    },
                    {
                        "start": 337,
                        "end": 345,
                        "text": "Figure 3",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Tuples extraction",
                "sec_num": "3.3."
            },
            {
                "text": "To avoid this, we build up a dictionary of translations for embedded words from the most accurate word alignment available. For a certain embedded word f_j and a given word alignment, we look for the target words e_i ... e_{i+K} that are most frequently aligned to f_j under these two conditions: 1. Target words e_i ... e_{i+K} are consecutive in the target sentence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tuples extraction",
                "sec_num": "3.3."
            },
            {
                "text": "2. Target words are aligned only to f_j or to NULL.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tuples extraction",
                "sec_num": "3.3."
            },
            {
                "text": "This way, we build up a statistical dictionary independently of the non-monotonicity of the word alignment. The entries of the dictionary are used as unigrams in the bilingual model estimated by the FST. To create the dictionary, all four aforementioned word alignments have been tested for several translation tasks, and the intersection has consistently given better results, even though its translations are always single words.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tuples extraction",
                "sec_num": "3.3."
            },
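The embedded-word dictionary can be sketched as below: over an aligned corpus, each embedded source word collects the consecutive target span most frequently aligned to it and to no other source word. This is a simplified sketch whose corpus format and names are invented; the 'or to NULL' relaxation of condition 2 is omitted for brevity.

```python
from collections import Counter

def embedded_dictionary(corpus, embedded):
    """Build a translation dictionary for embedded source words.
    `corpus` is a list of (src_tokens, tgt_tokens, links) triples,
    with links given as (src_idx, tgt_idx) pairs."""
    counts = {w: Counter() for w in embedded}
    for src, tgt, links in corpus:
        for j, w in enumerate(src):
            if w not in counts:
                continue
            tgt_idx = sorted(t for (s, t) in links if s == j)
            if not tgt_idx:
                continue
            # Condition 1: the aligned target words must be consecutive.
            if tgt_idx != list(range(tgt_idx[0], tgt_idx[-1] + 1)):
                continue
            # Condition 2: they must be aligned to no other source word.
            if any(s != j for (s, t) in links if t in set(tgt_idx)):
                continue
            counts[w][tuple(tgt[t] for t in tgt_idx)] += 1
    # Keep the most frequent qualifying span per embedded word.
    return {w: c.most_common(1)[0][0] for w, c in counts.items() if c}

# Toy aligned corpus (invented): 'zaijian' twice aligned to 'good bye'.
toy = [(["zaijian"], ["good", "bye"], {(0, 0), (0, 1)})] * 2
d = embedded_dictionary(toy, ["zaijian"])
```

Each resulting entry is then added to the bilingual model as a unigram, so decoding can fall back on it when no longer tuple matches.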
            {
                "text": "This strategy is useful, though not robust enough yet. By building up the dictionary, we are able to produce a word-by-word translation for some embedded words whenever the sequence in the test sentence does not match any training tuple. However, information on embedded N-grams is not extracted at the moment. This has growing importance when dealing with pairs of languages that differ greatly in word order, as in a Chinese-English task. In section 4.3 the impact of this technique is evaluated in practice.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Tuples extraction",
                "sec_num": "3.3."
            },
            {
                "text": "Finally, given the parallel corpus described as a set of tuples for each sentence, a Finite-State Transducer containing Xgram probabilities is learned. Usually, a maximum memory length of 3 is used to avoid over-fitting to the training data. A back-off strategy is followed, and a pruning of the resulting automaton can be performed. Two parameters are used for this: on the one hand, the minimum number of times a certain history (Xgram) must occur to be considered; on the other hand, two different nodes sharing the same recent history are merged if the divergence between their output probability distributions is smaller than a certain threshold (see details in [1]). Given the usual sparseness problems when dealing with parallel corpora, the first parameter is not used (set to 1), whereas the latter (hereafter referred to as 'f') performs a slight pruning.",
                "cite_spans": [
                    {
                        "start": 669,
                        "end": 672,
                        "text": "[1]",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Xgram estimation",
                "sec_num": "3.4."
            },
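The raw statistics behind such a model are plain tuple n-gram counts up to the maximum memory length. A minimal counting sketch: back-off weight estimation, the count threshold and the divergence-based node merging of [1] are all omitted, and the 'src|tgt' tuple notation is just an illustration.

```python
from collections import Counter, defaultdict

def count_xgrams(tuple_sentences, max_order=3):
    """Count tuple n-grams of order 1..max_order over sentences that have
    already been mapped to tuple sequences, with sentence-boundary padding."""
    counts = defaultdict(Counter)
    for sent in tuple_sentences:
        padded = ["<s>"] + list(sent) + ["</s>"]
        for n in range(1, max_order + 1):
            for k in range(len(padded) - n + 1):
                counts[n][tuple(padded[k:k + n])] += 1
    return counts

c = count_xgrams([["t1", "t2"], ["t1"]])
```

From these counts, histories seen often enough keep their full length while rare ones back off to shorter ones, which is what gives the Xgram its variable memory.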
            {
                "text": "The presented system has been evaluated in the framework of the International Workshop on Spoken Language Translation (IWSLT'04), a satellite workshop of Interspeech-ICSLP. In the workshop, an Evaluation Campaign has been conducted for two translation directions, namely Chinese-to-English and Japanese-to-English. Moreover, two different tracks per direction have been proposed, namely using only the supplied corpus (supplied) and allowing the use of any additional data for training purposes (unrestricted). In addition, an intermediate track allowing the use of the supplied corpus plus certain linguistic resources available from LDC has been proposed for the Chinese-English task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment and results",
                "sec_num": "4."
            },
            {
                "text": "TALP has participated only in the Chinese-to-English supplied track, the reason being that we believe the Japanese-to-English task to be even more demanding in terms of reordering. As our system lacks any direct treatment of long reorderings, we found that the Chinese-to-English task brought up enough challenges for research. Next, we present a brief description of the supplied corpus, the evaluation measures used in the track and the results achieved by two different TALP runs. Table 1 shows the main statistics of the supplied data, namely number of sentences, words, vocabulary, and maximum and average sentence lengths for each language, respectively. The difference between 'Train set' and 'Segmented train set' is the segmentation discussed in section 3.1. A development set of 506 sentences was also supplied, together with 16 reference English translations. There are 160 unseen words in the development set and 104 unseen words in the test set.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 484,
                        "end": 491,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experiment and results",
                "sec_num": "4."
            },
            {
                "text": "The output of the system is evaluated using automatic and manual evaluation measures. For the automatic evaluation, 16 man-made English reference translations of the test corpus are used. The evaluation measures include BLEU score, NIST score, mWER, mPER and GTM (general text matcher).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation measures",
                "sec_num": "4.2."
            },
            {
                "text": "As for human assessment, each translated sentence is evaluated by three human judges according to the \"fluency\" and \"adequacy\" of the translation. While fluency indicates how the evaluated segment sounds to a native speaker of English, from 'Incomprehensible' (1) to 'Flawless English' (5), adequacy judges how much of the information is carried over by the translation, from 'None of it' (1) to 'All of the information' (5).",
                "cite_spans": [
                    {
                        "start": 258,
                        "end": 261,
                        "text": "(1)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation measures",
                "sec_num": "4.2."
            },
            {
                "text": "Several different configurations were tested on the development set. Their results are shown in Table 2, where 'aU' and 'a2' refer to using the union and the s2t alignment, respectively. The term 'seg' refers to training with the segmented version of the corpus, whereas 'f' refers to setting the pruning parameter to 0.2 instead of leaving it at 0 (see section 3.4). As for the effect of using a dictionary of embedded words (see section 3.3), an evaluation without it has been performed, leading to the results shown with the term '-D'. In general, these results show only a slight variation in performance for both alignments, but with a remarkable drop in terms of NIST score. All runs using the union alignment leave 7 sentences untranslated (empty), whereas runs using the s2t alignment leave 16, 18, 19 and 19 sentences each. As we can see, the greatest difference between all the results lies in the original word alignment used to extract the bilingual tuples. The segmented version of the corpus provides a slight but consistent improvement, helping the training to produce more accurate alignments and shorter tuples. As for pruning, it seems that this technique does not make much of a difference, but it makes the algorithm a bit more efficient.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 97,
                        "end": 104,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Development work",
                "sec_num": "4.3."
            },
            {
                "text": "For the reasons presented above, we have selected configurations 'aU,seg,f' (run A) and 'a2,seg,f' (run B) as first-best and second-best for the test set. The difference in their original alignment makes a big difference in the final translation transducer, as we can see in the statistics shown in Table 3 , where the total number of tuples, tuples vocabulary size, average tuples length (adding source and target words) and number of embedded words are shown. Usually, the union alignment leads to much longer tuples, which in turn increases the number of embedded words, whose translation is 'solved' by the dictionary built up with the intersection. On the contrary, using the s2t alignment we increase the number of total tuples (by decreasing their length), reducing at the same time the number of embedded words. However, we appreciate an important increase in the percentage of tuples translating to NULL (up to 28%, in contrast to 7.5% for the union), an undesirable consequence of following the asymmetric alignment. This could be avoided by taking a hard decision as to where to align these tuples (whether to the previous or the next tuple), but we do not believe this to be of much gain compared to using the union alignment. Finally, many of the new unigrams that are created through the dictionary when using the union alignment, already exist in the FST using the s2t alignment, but they are linked to NULL, which is inappropriate when no history can help in decoding. Run A has produced no output in 5/500 sentences. Run B has produced no output in 11/500 sentences. Results show a surprising behaviour: while NIST score, PER and GTM clearly prefer run A, the BLEU metric gives a much better score to run B, being the WER practically identical in both cases. All in all, we believe run A to be slightly better and more consistent with human translation, being more based on a phrase translation approach. 
It seems that BLEU does not seem to penalise the 'shortening' effect of run B output. In fact, the average output sentence length for run A is 6.01 words, whereas for run B is only 5.18, a clear consequence of the high percentage of tuples translating to NULL. Table 5 presents the TALP results of the manual evaluation for run A. As expected given the lack of a reordering scheme in the statistical machine translator proposed, the fluency score does not even achieve a '3', meaning 'Non-native English'. However, the adequacy score is quite good as it means the 'Much of the information' is being translated in the output. run fluency adequacy A 2.792 3.022 Table 5 : Manual evaluation results for run A",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 299,
                        "end": 306,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 2183,
                        "end": 2190,
                        "text": "Table 5",
                        "ref_id": null
                    },
                    {
                        "start": 2582,
                        "end": 2589,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Test set results",
                "sec_num": "4.4."
            },
            {
                "text": "Among the various configurations tested, the biggest difference lies in the word alignment used to extract tuples. However, all of them share a very important limitation of the current architecture. This refers to word reordering, which is strictly limited to the local reordering inside a tuple, making the approach inappropriate for pairs of languages with a very different word order. In the Chinese-English case, the system is unable to perform long reorderings, which leads to an important loss in the fluency of the output translation. On the other hand, the addition of a dictionary when using the union alignment ensures that most of content words are translated, assuring that 'most of the information' is included. This is a typical problem of statistical machine translation systems, which tend to make stupid syntactic or morphological mistakes while still providing a 'fair' message translation. Some examples of translation and one reference for the development set are shown in Table 6 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 993,
                        "end": 1000,
                        "text": "Table 6",
                        "ref_id": "TABREF7"
                    }
                ],
                "eq_spans": [],
                "section": "Analysis and discussion",
                "sec_num": "5."
            },
            {
                "text": "Reference: what time does it start ? Translation: stomach very hurts .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation: that what time start ?",
                "sec_num": null
            },
            {
                "text": "Reference: i have a severe pain in my stomach . Finally, we would like to point out the seemingly inconsistent results of automatic evaluation measures, which demand further research towards finding more robust ways to measure translation performance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Translation: that what time start ?",
                "sec_num": null
            },
            {
                "text": "The statistical machine translation TALP system has been presented in detail. Description and training details from a parallel corpus have been shown. An evaluation in the framework of Chinese-English supplied task of the IWSLT'04 workshop has been performed. Results have been discussed, addressing the limitations of the system that are highlighted by this challenging translation task.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and further work",
                "sec_num": "6."
            },
            {
                "text": "Future work to improve the system should necessarily tackle the problem of embedded N-grams. One way of treating them would be to extract their translation in a dictionary as it is currently done with embedded words. This would lead to a phrase-based-like approach, but with a reduced set of phrases compared to current approaches.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and further work",
                "sec_num": "6."
            },
            {
                "text": "Moreover, a generalization of the extracted tuples is necessary, for example using classification algorithms or clustering. This could give the system the power of translation unseen tuples adequately, by using the context of 'similar' seen tuples.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and further work",
                "sec_num": "6."
            },
            {
                "text": "And last but not least, techniques to overcome the reordering limitation must be researched, even if that means some big structural change in the translation model based on FST, which proves currently inadequate for pairs of language with different word ordering.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and further work",
                "sec_num": "6."
            }
        ],
        "back_matter": [
            {
                "text": "The authors want to thank Josep Maria Crego and Jos\u00e9 A. R. Fonollosa (members of the TALP Research Center) for their contribution to this work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": "7."
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Language modeling using X-grams",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Bonafonte",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Mari\u00f1o",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proc. of the 4th Int. Conf. on Spoken Language Processing, ICSLP'96",
                "volume": "",
                "issue": "",
                "pages": "394--397",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Bonafonte and J. Mari\u00f1o, \"Language modeling using X-grams,\" Proc. of the 4th Int. Conf. on Spo- ken Language Processing, ICSLP'96, pp. 394-397, October 1996.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Giza++ software",
                "authors": [],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Giza++ software, \"http://www-i6.informatik.rwth- aachen.de/\u02dcoch/software/giza++.html.\"",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The mathematics of statistical machine translation",
                "authors": [
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Brown",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "Della"
                        ],
                        "last": "Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [
                            "Della"
                        ],
                        "last": "Pietra",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Mercer",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "Computational Linguistics",
                "volume": "19",
                "issue": "2",
                "pages": "263--311",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "P. Brown, S. Della Pietra, V. Della Pietra, and R. Mer- cer, \"The mathematics of statistical machine transla- tion,\" Computational Linguistics, vol. 19, no. 2, pp. 263-311, 1993.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Hmm-based word alignment in statistical translation",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Vogel",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Ney",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Tillmann",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "Proc. of the Int. Conf. on Computational Linguistics, COL-ING'96",
                "volume": "",
                "issue": "",
                "pages": "836--841",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Vogel, H. Ney, and C. Tillmann, \"Hmm-based word alignment in statistical translation,\" Proc. of the Int. Conf. on Computational Linguistics, COL- ING'96, pp. 836-841, August 1996.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Using X-grams for speech-to-speech translation",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "De Gispert",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Mari\u00f1o",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of the 7th Int. Conf. on Spoken Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. de Gispert and J. Mari\u00f1o, \"Using X-grams for speech-to-speech translation,\" Proc. of the 7th Int. Conf. on Spoken Language Processing, ICSLP'02, September 2002.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Experiments in word-ordering and morphological preprocessing for transducer-based statistical machine translation",
                "authors": [],
                "year": 2003,
                "venue": "IEEE Automatic Speech Recognition and Understanding Workhsop, ASRU'03",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "--, \"Experiments in word-ordering and morpho- logical preprocessing for transducer-based statistical machine translation,\" IEEE Automatic Speech Recog- nition and Understanding Workhsop, ASRU'03, November 2003.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Finite-state transducers for speech input translation",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Casacuberta",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "IEEE Automatic Speech Recognition and Understanding Workhsop, ASRU'01",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F. Casacuberta, \"Finite-state transducers for speech input translation,\" IEEE Automatic Speech Recog- nition and Understanding Workhsop, ASRU'01, De- cember 2001.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Finite-statebased and phrase-based statistical machine translation",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Crego",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Mari\u00f1o",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "De Gispert",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of the 8th Int. Conf. on Spoken Language Processing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Crego, J. Mari\u00f1o, and A. de Gispert, \"Finite-state- based and phrase-based statistical machine transla- tion,\" Proc. of the 8th Int. Conf. on Spoken Language Processing, ICSLP'04, October 2004.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "e, f ) n |(e, f ) n\u22121 , ..., (e, f ) n\u2212X+1 )}",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "FIGREF2": {
                "text": "Training stages from a parallel corpus to a translation FST",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "FIGREF3": {
                "text": "It induces a monotonous segmentation of the pair of sentences. 2. Words are consecutive along both source and target sides of the tuple. 3. No word on either side of the tuple is aligned to a word out of the tuple.",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "FIGREF4": {
                "text": "runs BLEU NIST WER PER GTM aU 0.244 5.169 0.615 0.529 0.591 aU,seg 0.251 5.187 0.607 0.521 0.595 aU,seg,f 0.255 5.210 0.603 0.518 0.594 aU,seg,-D 0.264 4.741 0.606 0.524 0.592 a2 0.319 3.789 0.614 0.552 0.573 a2,seg 0.318 3.871 0.606 0.546 0.573 a2,seg,f 0.314 3.678 0.607 0.548 0.570 a2,seg,-D 0.315 3.706 0.607 0.547 0.571",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "TABREF2": {
                "num": null,
                "html": null,
                "content": "<table/>",
                "text": "Automatic evaluation results (development set)",
                "type_str": "table"
            },
            "TABREF4": {
                "num": null,
                "html": null,
                "content": "<table/>",
                "text": "Statistics of two different runs",
                "type_str": "table"
            },
            "TABREF5": {
                "num": null,
                "html": null,
                "content": "<table><tr><td/><td>presents the results</td></tr><tr><td colspan=\"2\">obtained by these two runs evaluating against automatic</td></tr><tr><td>measures.</td><td/></tr><tr><td colspan=\"2\">runs BLEU NIST WER PER GTM</td></tr><tr><td>A</td><td>0.279 6.778 0.556 0.465 0.647</td></tr><tr><td>B</td><td>0.331 5.391 0.550 0.490 0.620</td></tr></table>",
                "text": "",
                "type_str": "table"
            },
            "TABREF6": {
                "num": null,
                "html": null,
                "content": "<table/>",
                "text": "Automatic evaluation results for two runs",
                "type_str": "table"
            },
            "TABREF7": {
                "num": null,
                "html": null,
                "content": "<table/>",
                "text": "Samples of translations and reference (dev set)",
                "type_str": "table"
            }
        }
    }
}