{
    "paper_id": "I17-1044",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:38:16.143855Z"
    },
    "title": "Local Monotonic Attention Mechanism for End-to-End Speech and Language Processing",
    "authors": [
        {
            "first": "Andros",
            "middle": [],
            "last": "Tjandra",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Nara Institute of Science and Technology",
                "location": {
                    "country": "Japan"
                }
            },
            "email": "andros.tjandra.ai6@is.naist.jp"
        },
        {
            "first": "Sakriani",
            "middle": [],
            "last": "Sakti",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Nara Institute of Science and Technology",
                "location": {
                    "country": "Japan"
                }
            },
            "email": "ssakti@is.naist.jp"
        },
        {
            "first": "Satoshi",
            "middle": [],
            "last": "Nakamura",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Nara Institute of Science and Technology",
                "location": {
                    "country": "Japan"
                }
            },
            "email": "s-nakamura@is.naist.jp"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Recently, encoder-decoder neural networks have shown impressive performance on many sequence-related tasks. The architecture commonly uses an attentional mechanism which allows the model to learn alignments between the source and the target sequence. Most attentional mechanisms used today is based on a global attention property which requires a computation of a weighted summarization of the whole input sequence generated by encoder states. However, it is computationally expensive and often produces misalignment on the longer input sequence. Furthermore, it does not fit with monotonous or left-to-right nature in several tasks, such as automatic speech recognition (ASR), grapheme-to-phoneme (G2P), etc. In this paper, we propose a novel attention mechanism that has local and monotonic properties. Various ways to control those properties are also explored. Experimental results on ASR, G2P and machine translation between two languages with similar sentence structures, demonstrate that the proposed encoderdecoder model with local monotonic attention could achieve significant performance improvements and reduce the computational complexity in comparison with the one that used the standard global attention architecture.",
    "pdf_parse": {
        "paper_id": "I17-1044",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Recently, encoder-decoder neural networks have shown impressive performance on many sequence-related tasks. The architecture commonly uses an attentional mechanism which allows the model to learn alignments between the source and the target sequence. Most attentional mechanisms used today is based on a global attention property which requires a computation of a weighted summarization of the whole input sequence generated by encoder states. However, it is computationally expensive and often produces misalignment on the longer input sequence. Furthermore, it does not fit with monotonous or left-to-right nature in several tasks, such as automatic speech recognition (ASR), grapheme-to-phoneme (G2P), etc. In this paper, we propose a novel attention mechanism that has local and monotonic properties. Various ways to control those properties are also explored. Experimental results on ASR, G2P and machine translation between two languages with similar sentence structures, demonstrate that the proposed encoderdecoder model with local monotonic attention could achieve significant performance improvements and reduce the computational complexity in comparison with the one that used the standard global attention architecture.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "End-to-end training is a newly emerging approach to sequence-to-sequence mapping tasks, that allows the model to directly learn the mapping between variable-length representation of different modalities (i.e., text-to-text sequence Sutskever et al., 2014) , speech-totext sequence (Chorowski et al., 2014; Chan et al., 2016) , image-to-text sequence (Xu et al., 2015) , etc).",
                "cite_spans": [
                    {
                        "start": 232,
                        "end": 255,
                        "text": "Sutskever et al., 2014)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 281,
                        "end": 305,
                        "text": "(Chorowski et al., 2014;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 306,
                        "end": 324,
                        "text": "Chan et al., 2016)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 350,
                        "end": 367,
                        "text": "(Xu et al., 2015)",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "One popular approaches in the end-to-end mapping tasks of different modalities is based on encoder-decoder architecture. The earlier version of an encoder-decoder model is built with only two different components (Sutskever et al., 2014; Cho et al., 2014b) : (1) an encoder that processes the source sequence and encodes them into a fixedlength vector; and (2) a decoder that produces the target sequence based on information from fixedlength vector given by encoder. Both the encoder and decoder are jointly trained to maximize the probability of a correct target sequence given a source sequence. This architecture has been applied in many applications such as machine translation (Sutskever et al., 2014; Cho et al., 2014b) , image captioning (Karpathy and Fei-Fei, 2015) , and so on.",
                "cite_spans": [
                    {
                        "start": 213,
                        "end": 237,
                        "text": "(Sutskever et al., 2014;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 238,
                        "end": 256,
                        "text": "Cho et al., 2014b)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 683,
                        "end": 707,
                        "text": "(Sutskever et al., 2014;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 708,
                        "end": 726,
                        "text": "Cho et al., 2014b)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 746,
                        "end": 774,
                        "text": "(Karpathy and Fei-Fei, 2015)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "However, such architecture encounters difficulties, especially for coping with long sequences. Because in order to generate the correct target sequence, the decoder solely depends only on the last hidden state of the encoder. In other words, the network needs to compress all of the information contained in the source sequence into a single fixed-length vector. (Cho et al., 2014a) demonstrated a decrease in the performance of the encoder-decoder model associated with an increase in the length of the input sentence sequence. Therefore, introduced attention mechanism to address these issues. Instead of relying on a fixed-length vector, the decoder is assisted by the attention module to get the related context from the encoder sides, depends on the current decoder states.",
                "cite_spans": [
                    {
                        "start": 363,
                        "end": 382,
                        "text": "(Cho et al., 2014a)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Most attention-based encoder-decoder model used today has a \"global\" property Luong et al., 2015) . Every time the decoder needs to predict the output given the previous output, it must compute a weighted summarization of the whole input sequence generated by the encoder states. This global property allows the decoder to address any parts of the source sequence at each step of the output generation and provides advantages in some cases like machine translation tasks. Specifically, when the source and the target languages have different sentence structures and the last part of the target sequence may depend on the first part of the source sequence. However, although the global attention mechanism has often improved performance in some tasks, it is very computationally expensive. For a case that requires mapping between long sequences, misalignments might happen in standard attention mechanism (Kim et al., 2017) . Furthermore, it does not fit with monotonous or left-toright natures in several tasks, such as ASR, G2P, etc.",
                "cite_spans": [
                    {
                        "start": 78,
                        "end": 97,
                        "text": "Luong et al., 2015)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 905,
                        "end": 923,
                        "text": "(Kim et al., 2017)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper, we propose a novel attention module that has two important characteristics to address those problems: local and monotonicity properties. The local property helps our attention module focus on certain parts from the source sequence that the decoder wants to transcribe, and the monotonicity property strictly generates alignment left-to-right from beginning to the end of the source sequence. In case of speech recognition task that need to produces a transcription given the speech signal, the attention module is now able to focus on the audio's specific timing and always move in one direction from the start to the end of the audio. Similar way can be applied also for G2P or machine translation (MT) between two languages with similar sentences structure, i.e., Subject-Verb-Object (SVO) word order in English and French languages. Experimental results demonstrate that the proposed encoder-decoder model with local monotonic attention could achieve significant performance improvements and reduce the computational complexity in comparison with the one that used the standard global attention architecture.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The encoder-decoder model is a neural network that directly models conditional probability",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "Figure 1: Attention-based encoder-decoder archi- tecture. p(y|x), where x = [x 1 , ..., x S ]",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "is the source sequence with length S and y = [y 1 , ..., y T ] is the target sequence with length T . Figure 1 shows the overall structure of the attention-based encoderdecoder model that consists of encoder, decoder and attention modules.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 102,
                        "end": 110,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "The encoder task processes input sequence x and outputs representative information h e = [h e 1 , ..., h e S ] for the decoder. The attention module is an extension scheme for assisting the decoder to find relevant information on the encoder side based on the current decoder hidden states Luong et al., 2015) . Usually, attention modules produces context information c t at the time t based on the encoder and decoder hidden states:",
                "cite_spans": [
                    {
                        "start": 290,
                        "end": 309,
                        "text": "Luong et al., 2015)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "c t = S s=1 a t (s) * h e s (1) a t (s) = Align(h e s , h d t ) = exp(Score(h e s , h d t )) S s=1 exp(Score(h e s , h d t ))",
                        "eq_num": "(2)"
                    }
                ],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "There are several variations for score functions:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "Score(h e s , h d t ) = \uf8f1 \uf8f4 \uf8f2 \uf8f4 \uf8f3 h e s , h d t , dot product h e s W s h d t , bilinear V s tanh(W s [h e s , h d t ]), MLP",
                        "eq_num": "(3)"
                    }
                ],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
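The following is a minimal NumPy sketch of the global attention step defined by Eqs. (1)-(3): an MLP scorer applied to every encoder state, a softmax alignment, and the weighted-sum context vector. It is an editorial illustration only; the array names, shapes, and random parameters are assumptions, not the paper's Chainer implementation.

```python
import numpy as np

def mlp_score(enc_states, dec_state, W_s, V_s):
    # Eq. (3), MLP variant: V_s tanh(W_s [h^e_s, h^d_t]) for every encoder state s.
    S = enc_states.shape[0]
    concat = np.concatenate([enc_states, np.tile(dec_state, (S, 1))], axis=1)  # (S, M+N)
    return np.tanh(concat @ W_s.T) @ V_s                                       # (S,)

def global_attention(enc_states, dec_state, W_s, V_s):
    # Eq. (2): softmax over all S scores; Eq. (1): weighted sum of encoder states.
    scores = mlp_score(enc_states, dec_state, W_s, V_s)
    a_t = np.exp(scores - scores.max())
    a_t /= a_t.sum()
    c_t = a_t @ enc_states        # context vector c_t, shape (M,)
    return a_t, c_t

# Toy usage: S=20 encoder states (M=4), decoder state (N=3), K=8 projection units.
rng = np.random.default_rng(0)
S, M, N, K = 20, 4, 3, 8
enc, dec = rng.standard_normal((S, M)), rng.standard_normal(N)
W_s, V_s = rng.standard_normal((K, M + N)), rng.standard_normal(K)
alignment, context = global_attention(enc, dec, W_s, V_s)
```

Note that every decoding step touches all S encoder states, which is the O(T * S) cost the local attention described below avoids.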
            {
                "text": "where Score :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "(R M \u00d7 R N ) \u2192 R, M",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "is the number of hidden units for encoder and N is the number of hidden units for decoder. Finally, the decoder task, which predicts the target sequence probability at time t based on previous output and context information c t can be formulated:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "log p(y|x) = T t=1 log p(y t |y <t , c t )",
                        "eq_num": "(4)"
                    }
                ],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "For speech recognition task, most common input x is a sequence of feature vectors like Mel-spectral filterbank and/or MFCC. Therefore, x \u2208 R S\u00d7D where D is the number of features and S is the total frame length for an utterance. Output y, which is a speech transcription sequence, can be either phoneme or grapheme (character) sequence. In text-related task such as machine translation, x and y are a sequence of word or character indexes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Attention-based Encoder Decoder Neural Network",
                "sec_num": "2"
            },
            {
                "text": "In the previous section, we explained the standard global attention-based encoder-decoder model. However, in order to control the area and focus attention given previous information, such mechanism requires to apply the scoring function into all the encoder states and normalizes them with a softmax function. Another problem is we cannot explicitly enforce the probability mass generated by the current attention modules that are always moving incrementally to the end of the source sequence. In this section, we discuss and explain how to model the locality and monotonicity properties on the attention module. This way, we could improve the sensitivity of capturing regularities and ensure to focus only an important subset instead of whole sequence. Figure 2 illustrates the overall mechanism of our proposed local monotonic attention, and details are described blow.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 754,
                        "end": 762,
                        "text": "Figure 2",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Locality and Monotonicity Properties",
                "sec_num": "3"
            },
            {
                "text": "Position First, we define how to predict the next central position of the alignment illustrated in . At time t, we want to decode the t-th target output given the source sequence, previous output y t\u22121 , and current decoder hidden states h d t \u2208 R N . In standard approaches, we use hidden states h d t to predict the position difference \u2206p t with a multilayer perceptron (MLP). We use variable \u2206p t to determine how far we should move the center of the alignment compared to previous center p t\u22121 .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "In this paper, we propose two different formulations for estimating \u2206p t to ensure a forward or monotonicity movement:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "\u2022 Constrained position prediction:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "We limit maximum range from \u2206p t with hyperparameter C max with the following equation:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "\u2206p t = C max * sigmoid(V p tanh(W p h d t )) (5)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "Here we can control how far our next center of alignment position p t relies on our datasets and guarantee 0 \u2264 \u2206p t \u2264 C max . However, it requires us to handle hyperparameter C max .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "\u2022 Unconstrained position prediction:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "Compared to a previous formulation, since we do not limit the maximum range of \u2206p t , here we can ignore hyperparameter C max and use exponential (exp) function instead of sigmoid. We can also use another function (e.g softplus) as long as the function satisfy f : R \u2192 R + 0 and the result of \u2206p t \u2265 0. We formulate unconstrained position prediction with following equation:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "\u2206p t = exp(V p tanh(W p h d t )) (6) Here V p \u2208 R K\u00d71 , W p \u2208 R K\u00d7N , N",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
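As a concrete illustration of the two update rules, the sketch below (NumPy, with placeholder weights; not the paper's implementation) computes the constrained step of Eq. (5), the unconstrained step of Eq. (6), and the monotonic center update p_t = p_{t-1} + Δp_t.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def delta_p_constrained(h_dec, W_p, V_p, C_max):
    # Eq. (5): step bounded to [0, C_max] via a scaled sigmoid.
    return C_max * sigmoid(V_p @ np.tanh(W_p @ h_dec))

def delta_p_unconstrained(h_dec, W_p, V_p):
    # Eq. (6): exp(.) >= 0, so no C_max is needed (softplus would also satisfy f: R -> R+_0).
    return np.exp(V_p @ np.tanh(W_p @ h_dec))

# Monotonic center update: p_t = p_{t-1} + delta_p_t >= p_{t-1}.
rng = np.random.default_rng(0)
N, K = 512, 256                      # decoder hidden units and projection units (as in Sec. 4.2)
h_dec = rng.standard_normal(N)
W_p, V_p = 0.01 * rng.standard_normal((K, N)), 0.01 * rng.standard_normal(K)
p_t = 10.0 + delta_p_unconstrained(h_dec, W_p, V_p)
```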
            {
                "text": "is the number of decoder hidden units and K is the number of hidden projection layer units. We omit the bias for simplicity. Both equations guarantee monotonicity properties since \u2200t \u2208",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "[1..T ], p t \u2265 (p t\u22121 + \u2206p t ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "Additionally, we also used scaling variable \u03bb t to scale the unnormalized Gaussian distribution that depends on h t . We calculated \u03bb t with following equation:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "\u03bb t = exp(V \u03bb tanh(W p h d t ))",
                        "eq_num": "(7)"
                    }
                ],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "where V \u03bb \u2208 R K\u00d71 . In our initial experiments, we discovered that we improved our model performance by scaling with \u03bb t for each time-step. The main objective of this step is to generate a scaled Gaussian distribution a N t :",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "a N t (s) = \u03bb t * exp \u2212 (s \u2212 p t ) 2 2\u03c3 2 .",
                        "eq_num": "(8)"
                    }
                ],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
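A short sketch of the scaling factor of Eq. (7) and the unnormalized Gaussian prior of Eq. (8), again in NumPy with illustrative parameters; the weights and the resulting λ_t are placeholders rather than trained values.

```python
import numpy as np

def scaled_gaussian_prior(S, p_t, sigma, h_dec, W_p, V_lam):
    # Eq. (7): lambda_t predicted from the decoder state.
    lam_t = np.exp(V_lam @ np.tanh(W_p @ h_dec))
    # Eq. (8): unnormalized Gaussian over encoder positions, centered at p_t, scaled by lambda_t.
    s = np.arange(S)
    return lam_t * np.exp(-((s - p_t) ** 2) / (2.0 * sigma ** 2))

# Example: 50 encoder frames, center p_t = 12.3, window hyperparameter 2*sigma = 3.
rng = np.random.default_rng(0)
N, K = 512, 256
a_N = scaled_gaussian_prior(S=50, p_t=12.3, sigma=1.5,
                            h_dec=rng.standard_normal(N),
                            W_p=0.01 * rng.standard_normal((K, N)),
                            V_lam=0.01 * rng.standard_normal(K))
```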
            {
                "text": "where p t is the mean and \u03c3 is the standard deviation, both of which are used to calculate the weighted sum from the encoder states to generate context vector c t later. In this paper, we treat \u03c3 as a hyperparameter.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Monotonicity-based Prediction of Central",
                "sec_num": "1."
            },
            {
                "text": "After calculating new position p t , we generate locality-based alignment, as shown in Part (2) of Figure 2 . Based on predicted position p t , we follow (Luong et al., 2015) to generate alignment a S t only within [p t \u2212 2\u03c3, p t + 2\u03c3]:",
                "cite_spans": [
                    {
                        "start": 154,
                        "end": 174,
                        "text": "(Luong et al., 2015)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 99,
                        "end": 107,
                        "text": "Figure 2",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Locality-based Alignment Generation",
                "sec_num": "2."
            },
            {
                "text": "a S t (s) = Align(h e s , h d t ), (9) \u2200s \u2208 [p t \u2212 2\u03c3, p t + 2\u03c3].",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Locality-based Alignment Generation",
                "sec_num": "2."
            },
            {
                "text": "Since p t is a real number and the indexes for the encoder states are integers, we convert p t into an integer with floor operation. After we know the center of the position p t , we only need to calculate the scores (Eq. 3) for each encoder states in [p t \u22122\u03c3, .., p t +2\u03c3] then calculate the context alignment scores (Eq. 2).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Locality-based Alignment Generation",
                "sec_num": "2."
            },
            {
                "text": "Compared to the standard global attention, we can reduce the decoding computational complexity O(T * S) into O(T * \u03c3) where \u03c3 S and \u03c3 is constant, T is total decoding step, S is the length of the encoder states.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Locality-based Alignment Generation",
                "sec_num": "2."
            },
            {
                "text": "In the last step, we calculate context c t with alignments a N t and a S t , as shown in Part (3) of Figure 2 :",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 101,
                        "end": 109,
                        "text": "Figure 2",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Context Calculation",
                "sec_num": "3."
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "c t = (pt+2\u03c3) s=(pt\u22122\u03c3) a N t (s) * a S t (s) * h e s",
                        "eq_num": "(10)"
                    }
                ],
                "section": "Context Calculation",
                "sec_num": "3."
            },
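Putting the three steps together, the sketch below (NumPy, an editorial illustration under the same assumptions as the earlier snippets) floors p_t to an index, scores only the encoder states inside [p_t − 2σ, p_t + 2σ] (Eq. 9), multiplies the Gaussian prior by the local softmax alignment, and forms the context of Eq. (10).

```python
import numpy as np

def local_monotonic_context(enc_states, dec_state, p_t, sigma, lam_t, score_fn):
    S = enc_states.shape[0]
    center = int(np.floor(p_t))                 # convert the real-valued p_t to an index
    w = int(2 * sigma)
    lo, hi = max(0, center - w), min(S, center + w + 1)
    window = enc_states[lo:hi]                  # only the states inside the local window are scored
    # "Likelihood" a^S_t: softmax of the scores inside the window (Eq. 9 with Eq. 2).
    scores = np.array([score_fn(h, dec_state) for h in window])
    a_S = np.exp(scores - scores.max()); a_S /= a_S.sum()
    # "Prior" a^N_t: scaled Gaussian around p_t (Eq. 8), evaluated on the same window.
    s = np.arange(lo, hi)
    a_N = lam_t * np.exp(-((s - p_t) ** 2) / (2.0 * sigma ** 2))
    # Eq. (10): unnormalized "posterior" weights times encoder states give the context c_t.
    return (a_N * a_S) @ window

# Toy usage with a dot-product scorer (assumes matching encoder/decoder dimensions).
rng = np.random.default_rng(0)
enc, dec = rng.standard_normal((40, 8)), rng.standard_normal(8)
c_t = local_monotonic_context(enc, dec, p_t=17.4, sigma=1.5, lam_t=1.0,
                              score_fn=lambda he, hd: float(he @ hd))
```

Per decoding step this touches only a fixed-size window instead of all S encoder states, which is the O(T * σ) complexity mentioned above.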
            {
                "text": "Context c t and current hidden state h d t will later be utilized for calculating current output y t .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Calculation",
                "sec_num": "3."
            },
            {
                "text": "Overall, we can rephrase the first step as generating \"prior\" probabilities a N t based on the previous p t\u22121 position and the current decoder states. Then the second step task generates \"likelihood\" probabilities a S t by measuring the relevance of our encoder states with the current decoder states. In the third step, we combine our \"prior\" and \"likelihood\" probability into an unnormalized \"posterior\" probability a t and calculate expected context c t .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Context Calculation",
                "sec_num": "3."
            },
            {
                "text": "We applied our proposed architecture on ASR task. The local property helps our attention module focus on certain parts from the speech that the decoder wants to transcribe, and the monotonicity property strictly generates alignment left-to-right from beginning to the end of the speech.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment on Speech Recognition",
                "sec_num": "4"
            },
            {
                "text": "We conducted our experiments on the TIMIT 1 (Garofolo et al., 1993) dataset with the same set-up for training, development, and test sets as defined in the Kaldi s5 recipe (Povey et al., 2011) . The training set contains 3696 sentences from 462 speakers. We also used another sets of 50 speakers for the development set and the test set contains 192 utterances, 8 each from 24 speakers. For every experiment, we used 40-dimensional fbank with delta and acceleration (total 120-dimension feature vector) extracted from the Kaldi toolkit. The input features were normalized by subtracting the mean and divided by the standard deviation from the training set. For our decoder target, we re-mapped the original target phoneme set from 61 into 39 phoneme class plus the end of sequence mark (eos).",
                "cite_spans": [
                    {
                        "start": 172,
                        "end": 192,
                        "text": "(Povey et al., 2011)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Speech Data",
                "sec_num": "4.1"
            },
            {
                "text": "On the encoder sides, we projected our input features with a linear layer with 512 hidden units followed by tanh activation function. We used three bidirectional LSTMs (Bi-LSTM) for our encoder with 256 hidden units for each LSTM (total 512 hidden units for Bi-LSTM). To reduce the computational time, we used hierarchical subsampling (Graves, 2012; Bahdanau et al., 2016) , applied it to the top two Bi-LSTM layers, and reduced their length by a factor of 4.",
                "cite_spans": [
                    {
                        "start": 335,
                        "end": 349,
                        "text": "(Graves, 2012;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 350,
                        "end": 372,
                        "text": "Bahdanau et al., 2016)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model Architectures",
                "sec_num": "4.2"
            },
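The hierarchical subsampling can be pictured as plain frame skipping applied at two successive encoder layers; the snippet below is a simplified stand-in for that idea (NumPy), not the exact Chainer implementation from the paper.

```python
import numpy as np

def subsample(frames, factor=2):
    # Keep every `factor`-th frame along the time axis.
    return frames[::factor]

# 120-dim features (40 fbank + delta + acceleration) over 400 frames.
utterance = np.zeros((400, 120))
after_layer2 = subsample(utterance)      # 200 frames after the first subsampled Bi-LSTM layer
after_layer3 = subsample(after_layer2)   # 100 frames: overall length reduced by a factor of 4
```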
            {
                "text": "On the decoder sides, we used a 64dimensional embedding matrix to transform the input phonemes into a continuous vector, followed by two unidirectional LSTMs with 512 hidden units. For every local monotonic model, we used an MLP with 256 hidden units to generate \u2206p t and \u03bb t . Hyperparameter 2\u03c3 was set to 3, and C max for constrained position prediction (see Eq. 5) was set to 5. Both hyperparameters were empirically selected and generally gave consistent results across various settings in our proposed model. For our scorer module, we used bilinear and MLP scorers (see Eq 3) with 256 hidden units. We used an Adam (Kingma and Ba, 2014) optimizer with a learning rate of 5e \u2212 4.",
                "cite_spans": [
                    {
                        "start": 620,
                        "end": 641,
                        "text": "(Kingma and Ba, 2014)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model Architectures",
                "sec_num": "4.2"
            },
            {
                "text": "In the recognition phase, we generated transcriptions with best-1 (greedy) search from the decoder. We did not use any language model in this work. All of our models were implemented on the Chainer framework (Tokui et al., 2015) .",
                "cite_spans": [
                    {
                        "start": 208,
                        "end": 228,
                        "text": "(Tokui et al., 2015)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model Architectures",
                "sec_num": "4.2"
            },
            {
                "text": "For comparison, we evaluated our proposed model with the standard global attention-based encoder-decoder model and local-m attention (Luong et al., 2015) as the baseline. Most of the con-figurations follow the above descriptions, except the baseline model that does not have an MLP for generating \u2206p t and \u03bb t . Table 1 summarizes our experiments on our proposed local attention models and compares them to the baseline model using several possible scenarios.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 312,
                        "end": 319,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Model Architectures",
                "sec_num": "4.2"
            },
            {
                "text": "Considering the use of constrained and unconstrained position prediction \u2206p t , our results show that the model with the unconstrained position prediction (exp) model gives better results than one based on the constrained position prediction (sigmoid) model on both MLP and bilinear scorers. We conclude that it is more beneficial to use the unconstrained position prediction formulation since it gives better performance and we do not need to handle the additional hyperparameter C max .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Constrained vs Unconstrained Position Prediction",
                "sec_num": "5.1"
            },
            {
                "text": "Next we investigate the importance of the scorer module by comparing the results between a model with and without it. Our results reveal that, by only relying on Gaussian alignment a N t and set a S t = 1, our model performance's was worse than one that used both the scorer and Gaussian alignment. This might be because the scorer modules are able to correct the details from the Gaussian alignment based on the relevance of the encoder states in the current decoder states. Thus, we conclude that alignment with the scorer is essential for our proposed models.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Alignment Scorer vs Non-Scorer",
                "sec_num": "5.2"
            },
            {
                "text": "Overall, our proposed encoder-decoder model with local monotonic attention significantly improved the performance and reduced the computational complexity in comparison with one that used standard global attention mechanism (we cannot compare directly with (Chorowski et al., 2014) since its pretrained with HMM state alignment). We also tried local-m attention from (Luong et al., 2015), however our model cannot converge and we hypothesize the reason is because ratio length between the speech and their corresponding text is larger than 1, therefore the Table 1 : Results from baseline and proposed models on ASR task with TIMIT test set.",
                "cite_spans": [
                    {
                        "start": 257,
                        "end": 281,
                        "text": "(Chorowski et al., 2014)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 557,
                        "end": 564,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Overall comparison to the baseline",
                "sec_num": "5.3"
            },
            {
                "text": "Test PER (%) Global Attention Model (Baseline) Att Enc-Dec (pretrained with HMM align) (Chorowski et al., 2014) 18.6",
                "cite_spans": [
                    {
                        "start": 87,
                        "end": 111,
                        "text": "(Chorowski et al., 2014)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": null
            },
            {
                "text": "Att Enc-Dec (Pereyra et al., 2017) 23.2 Att Enc-Dec (Luo et al., 2016) 24.5 Att Enc-Dec with MLP Scorer (ours) 23.8 Att Enc-Dec with local-m (ours) (Luong et al., 2015 \u2206p t cannot be represented by fixed value. The best performance achieved by our proposed model with unconstrained position prediction and bilinear scorer, and provided 12.2% relative error rate reduction to our baseline.",
                "cite_spans": [
                    {
                        "start": 12,
                        "end": 34,
                        "text": "(Pereyra et al., 2017)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 52,
                        "end": 70,
                        "text": "(Luo et al., 2016)",
                        "ref_id": null
                    },
                    {
                        "start": 104,
                        "end": 110,
                        "text": "(ours)",
                        "ref_id": null
                    },
                    {
                        "start": 148,
                        "end": 167,
                        "text": "(Luong et al., 2015",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model",
                "sec_num": null
            },
            {
                "text": "We also investigated our proposed architecture on G2P conversion task. Here, the model need to generate corresponding phoneme given small segment of characters and its always moving from left to right. The local property helps our attention module focus on certain parts from the grapheme source sequence that the decoder wants to convert into phoneme, and the monotonicity property strictly generates alignment left-to-right from beginning to the end of the grapheme source sequence.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment on Grapheme-to-Phoneme",
                "sec_num": "6"
            },
            {
                "text": "Here, we used the CMUDict dataset 2 . It contains 113438 words for training and 12753 for testing (12000 unique words). For validation, we randomly select 3000 sentences from the training set. The evaluation metrics for this task are phoneme error rate (PER) and word error rate (WER). In the evaluation process, there are some words has multiple references (pronunciations). Therefore, we select one of the references that has lowest PER between compared to our hypothesis, and if the hypothesis completely match with one of those references, then the WER is not increasing. For our encoder input, we used 26 letter (A-Z) + single quotes ('). For our decoder target, we used 39 phonemes plus the end of sequence mark (eos).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dataset",
                "sec_num": "6.1"
            },
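As a rough illustration of the multi-reference scoring described above (our own sketch, not the authors' code), the helper below computes PER against the closest reference pronunciation and only counts a WER error when the hypothesis matches none of the references; the function names and data layout are assumptions.

```python
# Sketch of multi-reference G2P scoring: pick the reference pronunciation with
# the lowest PER against the hypothesis; WER only increases when the hypothesis
# matches none of the reference pronunciations exactly.

def edit_distance(ref, hyp):
    """Levenshtein distance between two phoneme sequences."""
    d = [[i + j if i * j == 0 else 0 for j in range(len(hyp) + 1)]
         for i in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                       # deletion
                          d[i][j - 1] + 1,                       # insertion
                          d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]))
    return d[-1][-1]

def score_g2p(test_items):
    """test_items: list of (list_of_reference_pronunciations, hypothesis)."""
    phone_errors = phone_total = word_errors = 0
    for references, hypothesis in test_items:
        best_ref = min(references, key=lambda r: edit_distance(r, hypothesis))
        phone_errors += edit_distance(best_ref, hypothesis)
        phone_total += len(best_ref)
        if all(hypothesis != r for r in references):
            word_errors += 1
    per = 100.0 * phone_errors / phone_total
    wer = 100.0 * word_errors / len(test_items)
    return per, wer
```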
            {
                "text": "On the encoder sides, the characters input were projected into 256 dims using embedding matrix. We used two bidirectional LSTMs (Bi-LSTM) for our encoder with 512 hidden units for each LSTM (total 1024 hidden units for Bi-LSTM). On the decoder sides, the phonemes input were projected into 256 dims using embedding matrix, followed by two unidirectional LSTMs with 512 hidden units. For local monotonic model, we used an MLP with 256 hidden units to generate \u2206p t and \u03bb t . For this task, we only used the unconstrained formulation because based on previous sections, we able to achieved better performance and we didn't need to find optimal hyperparameter for C max . For our scorer module, we used MLP scorer with 256 hidden units.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model Architectures",
                "sec_num": "6.2"
            },
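The following PyTorch-style sketch only mirrors the layer sizes listed above (256-dim embeddings, two Bi-LSTM encoder layers and two LSTM decoder layers with 512 units, and 256-unit MLPs for the position predictor and scorer). The class and method names are ours, and the exp parameterization used to keep Δp_t and λ_t positive is an assumption about the unconstrained formulation, not the paper's exact equations.

```python
import torch
import torch.nn as nn

class G2PLocalMonotonic(nn.Module):
    """Illustrative layer stack for the G2P model; not the authors' code."""

    def __init__(self, n_chars, n_phones, emb=256, hid=512, mlp=256):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, emb)                 # 256-dim character embedding
        self.encoder = nn.LSTM(emb, hid, num_layers=2,
                               bidirectional=True, batch_first=True)  # 2 x Bi-LSTM, 512 units/direction
        self.phone_emb = nn.Embedding(n_phones, emb)               # 256-dim phoneme embedding
        self.decoder = nn.LSTM(emb, hid, num_layers=2, batch_first=True)  # 2 x LSTM, 512 units
        # MLP (256 hidden units) that predicts the step size delta_p_t and lambda_t
        self.position_mlp = nn.Sequential(nn.Linear(hid, mlp), nn.Tanh(),
                                          nn.Linear(mlp, 2))
        # MLP scorer (256 hidden units) over the decoder state and an encoder state
        self.scorer = nn.Sequential(nn.Linear(hid + 2 * hid, mlp), nn.Tanh(),
                                    nn.Linear(mlp, 1))

    def predict_step(self, dec_state):
        out = self.position_mlp(dec_state)
        delta_p = torch.exp(out[..., 0])   # assumed: exp keeps the step positive (unconstrained)
        lam = torch.exp(out[..., 1])       # assumed positive parameterization as well
        return delta_p, lam
```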
            {
                "text": "In the decoding phase, we used beam search strategy with beam size 3 to generate the phonemes given the character sequences. For comparison, we evaluated our model with standard global attention and local-m attention model (Luong et al., 2015) as the baseline. Table 2 summarizes our experiment on proposed local attention models. We compared our proposed models with several baselines from other algorithm as well. Our model significantly improving the PER and WER compared to encoderdecoder, attention-based global softmax and localm attention (fixed-step size). Compared to Bi-LSTM model which was trained with explicit alignment, we achieve slightly better PER and WER with larger window size (2\u03c3 = 3).",
                "cite_spans": [
                    {
                        "start": 223,
                        "end": 243,
                        "text": "(Luong et al., 2015)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 261,
                        "end": 268,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Model Architectures",
                "sec_num": "6.2"
            },
            {
                "text": "We also conducted experiment on machine translation task, specifically between two languages with similar sentences structure. By using our proposed method, we able to focus only to a small related segment on the source side and the target generation process usually follows the source sentence structure without many reordering process.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiment on Machine Translation",
                "sec_num": "7"
            },
            {
                "text": "We used BTEC dataset (Kikui et al., 2003) and chose English-to-France and Indonesian-to-English parallel corpus. From BTEC dataset, we extracted 162318 sentences for training and 510 sentences for test data. Because there are no default development set, we randomly sampled 1000 sentences from training data for validation set. For all language pairs, we preprocessed our dataset using Moses (Koehn et al., 2007) tokenizer. For training, we replaced any word that appear less then twice with unknown (unk) symbol. In details, we keep 10105 words for French corpus, 8265 words for English corpus and 9577 words for Indonesian corpus. We only used sentence pairs where the source is no longer than 60 words in training phase.",
                "cite_spans": [
                    {
                        "start": 21,
                        "end": 41,
                        "text": "(Kikui et al., 2003)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 392,
                        "end": 412,
                        "text": "(Koehn et al., 2007)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Dataset",
                "sec_num": "7.1"
            },
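A small sketch of the preprocessing described above (our own illustration, with hypothetical function names): words seen fewer than twice are replaced by the unk symbol, and sentence pairs whose source exceeds 60 tokens are dropped.

```python
from collections import Counter

MIN_COUNT = 2      # keep words that appear at least twice
MAX_SRC_LEN = 60   # drop pairs with a source longer than 60 words

def build_vocab(tokenized_sentences):
    counts = Counter(tok for sent in tokenized_sentences for tok in sent)
    return {w for w, c in counts.items() if c >= MIN_COUNT}

def preprocess(pairs):
    """pairs: list of (source_tokens, target_tokens), already Moses-tokenized."""
    src_vocab = build_vocab(src for src, _ in pairs)
    tgt_vocab = build_vocab(tgt for _, tgt in pairs)
    kept = []
    for src, tgt in pairs:
        if len(src) > MAX_SRC_LEN:
            continue
        src = [w if w in src_vocab else "unk" for w in src]
        tgt = [w if w in tgt_vocab else "unk" for w in tgt]
        kept.append((src, tgt))
    return kept
```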
            {
                "text": "On both encoder and decoder sides, the input words were projected into 256 dims using embedding matrix. We used three Bi-LSTM for our encoder with 512 hidden units for each LSTM (total 1024 hidden unit for Bi-LSTM). For our decoder, we used three LSTM with 512 hidden units. For local monotonic model, we used an MLP with 256 hidden units to generate \u2206p t and \u03bb t . Same as previous section, we only used the unconstrained for-mulation for local monotonic experiment. For our scorer module, we used MLP scorer with 256 hidden units. In the decoding phase, we used beam search strategy with beam size 5 and normalized length penalty with \u03b1 = 1 (Wu et al., 2016) . For comparison, we evaluate our model with standard global attention and local-m attention model (Luo et al., 2016) as the baseline. Table 3 summarizes our experiment on proposed local attention models compared to baseline global attention model and local-m attention model (Luong et al., 2015) . Generally, local monotonic attention had better result compared to global attention on both English-to-France and Indonesianto-English translation task. Our proposed model were able to improve the BLEU up to 2.2 points on English-to-France and 3.6 points on Indonesianto-English translation task compared to standard global attention. Compared to local-m attention with fixed step size, our proposed model able to improve the performance up to 0.8 BLEU on English-to-France and 2.0 BLEU on Indonesianto-English translation task.",
                "cite_spans": [
                    {
                        "start": 643,
                        "end": 660,
                        "text": "(Wu et al., 2016)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 760,
                        "end": 778,
                        "text": "(Luo et al., 2016)",
                        "ref_id": null
                    },
                    {
                        "start": 937,
                        "end": 957,
                        "text": "(Luong et al., 2015)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 796,
                        "end": 803,
                        "text": "Table 3",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Model Architecture",
                "sec_num": "7.2"
            },
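For the length-normalized beam scores, the sketch below applies the length penalty of Wu et al. (2016) with α = 1 to re-rank finished hypotheses; the helper names are ours and the beam expansion itself is omitted.

```python
ALPHA = 1.0  # length-penalty exponent, alpha = 1 as used in our decoding

def length_penalty(length, alpha=ALPHA):
    # Wu et al. (2016): lp(Y) = ((5 + |Y|) / 6) ** alpha
    return ((5.0 + length) / 6.0) ** alpha

def rescore(hypotheses):
    """hypotheses: list of (token_list, sum_log_prob). Returns best-first order."""
    return sorted(hypotheses,
                  key=lambda h: h[1] / length_penalty(len(h[0])),
                  reverse=True)
```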
            {
                "text": "Humans do not generally process all of the information that they encounter at once. Selective attention, which is a critical property in human perception, allows attention to be focused on particular information while filtering out a range of other information. The biological structure of the eye and the eye movement mechanism is one part of visual selective attention that provides the ability to focus attention selectively on parts of the visual space to acquire information when and where it is needed (Rensink, 2000) . In the case of the cocktail party effect, humans can selectively focus their attentive hearing on a single speaker among various conversation and background noise sources (Cherry, 1953) . The attention mechanism in deep learning has been studied for many years. But, only recently have attention mechanisms made their way into the sequence-to-sequence deep learning architectures that were proposed to solve machine translation tasks. Such mechanisms provide a model with the ability to jointly align and translate . With the attention-based model, the encoder-decoder model significantly improved the performance on machine translation Luong et al., 2015) and has successfully been applied to ASR tasks (Chorowski et al., 2014; Chan et al., 2016) .",
                "cite_spans": [
                    {
                        "start": 508,
                        "end": 523,
                        "text": "(Rensink, 2000)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 697,
                        "end": 711,
                        "text": "(Cherry, 1953)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 1163,
                        "end": 1182,
                        "text": "Luong et al., 2015)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 1230,
                        "end": 1254,
                        "text": "(Chorowski et al., 2014;",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 1255,
                        "end": 1273,
                        "text": "Chan et al., 2016)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "8"
            },
            {
                "text": "However, as we mentioned earlier, most of those attention mechanism are based on \"global\" property, where the attention module tries to match the current hidden states with all the states from the encoder sides. This approach is inefficient and computationally expensive on longer source sequences. A \"local attention\" was recently introduced by (Luong et al., 2015) which provided the capability to only focus small subset of the encoder sides. They also proposed monotonic attention but limited to fixed step-size and not suitable for a task where the length ratio between source and target sequence is vastly different. Our proposed method are able to elevated this problem by predicting the step size dynamically instead of using fixed step size. After we constructed our proposed framework, we found work by (Raffel et al., 2017) recently that also proposed a method for producing monotonic alignment by using Bernoulli random variable to control when the alignment should stop and generate output. However, it cannot attend the source sequence outside the range between previous and current position. In contrast with our approach, we are able to control how large the area we want to attend based on the window size. (Chorowski et al., 2014) also proposed a soft constraint to encourage monotonicity by invoking a penalty based on the current alignment and previous alignments. However, the methods still did not guarantee a monotonicity movement of the attention.",
                "cite_spans": [
                    {
                        "start": 346,
                        "end": 366,
                        "text": "(Luong et al., 2015)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 813,
                        "end": 834,
                        "text": "(Raffel et al., 2017)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 1224,
                        "end": 1248,
                        "text": "(Chorowski et al., 2014)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "8"
            },
            {
                "text": "To the best of our knowledge, only few studies have explored about local and monotonicity properties on an attention-based model. This work presents a novel attention module with locality and monotonicity properties. Our proposed mechanism strictly enforces monotonicity and locality properties in their alignment by explicitly modeling them in mathematical equations. The observation on our proposed model can also possibly act as regularizer by only observed a subset of encoder states. Here, we also explore various ways to control both properties and evaluate the impact of each variations on our proposed model. Experimental results also demonstrate that the proposed encoder-decoder model with local monotonic attention could provide a better performances in comparison with the standard global attention architecture and local-m attention model (Luong et al., 2015) .",
                "cite_spans": [
                    {
                        "start": 852,
                        "end": 872,
                        "text": "(Luong et al., 2015)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "8"
            },
            {
                "text": "This paper demonstrated a novel attention mechanism for encoder decoder model that ensures monotonicity and locality properties. We explored various ways to control these properties, including dynamic monotonicity-based position prediction and locality-based alignment generation. The results reveal our proposed encoder-decoder model with local monotonic attention significantly improved the performance on three different tasks and able to reduced the computational complexity more than one that used standard global attention architecture.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "9"
            },
            {
                "text": "Part of this work was supported by JSPS KAKENHI Grant Numbers JP17H06101 and JP17K00237.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgement",
                "sec_num": "10"
            },
            {
                "text": "https://catalog.ldc.upenn.edu/ldc93s1",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "CMUdict: https://sourceforge.net/ projects/cmusphinx/files/G2P%20Models/ phonetisaurus-cmudict-split.tar.gz",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Neural machine translation by jointly learning to align and translate",
                "authors": [
                    {
                        "first": "Dzmitry",
                        "middle": [],
                        "last": "Bahdanau",
                        "suffix": ""
                    },
                    {
                        "first": "Kyunghyun",
                        "middle": [],
                        "last": "Cho",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshua",
                        "middle": [],
                        "last": "Bengio",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1409.0473"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Endto-end attention-based large vocabulary speech recognition",
                "authors": [
                    {
                        "first": "Dzmitry",
                        "middle": [],
                        "last": "Bahdanau",
                        "suffix": ""
                    },
                    {
                        "first": "Jan",
                        "middle": [],
                        "last": "Chorowski",
                        "suffix": ""
                    },
                    {
                        "first": "Dmitriy",
                        "middle": [],
                        "last": "Serdyuk",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Acoustics, Speech and Signal Processing (ICASSP)",
                "volume": "",
                "issue": "",
                "pages": "4945--4949",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, and Yoshua Bengio. 2016. End- to-end attention-based large vocabulary speech recognition. In Acoustics, Speech and Signal Pro- cessing (ICASSP), 2016 IEEE International Confer- ence on, pages 4945-4949. IEEE.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition",
                "authors": [
                    {
                        "first": "William",
                        "middle": [],
                        "last": "Chan",
                        "suffix": ""
                    },
                    {
                        "first": "Navdeep",
                        "middle": [],
                        "last": "Jaitly",
                        "suffix": ""
                    },
                    {
                        "first": "Quoc",
                        "middle": [],
                        "last": "Le",
                        "suffix": ""
                    },
                    {
                        "first": "Oriol",
                        "middle": [],
                        "last": "Vinyals",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Acoustics, Speech and Signal Processing (ICASSP)",
                "volume": "",
                "issue": "",
                "pages": "4960--4964",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "William Chan, Navdeep Jaitly, Quoc Le, and Oriol Vinyals. 2016. Listen, attend and spell: A neural network for large vocabulary conversational speech recognition. In Acoustics, Speech and Signal Pro- cessing (ICASSP), 2016 IEEE International Confer- ence on, pages 4960-4964. IEEE.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Some experiments on the recognition of speech, with one and with two ears",
                "authors": [
                    {
                        "first": "Cherry",
                        "middle": [],
                        "last": "E Colin",
                        "suffix": ""
                    }
                ],
                "year": 1953,
                "venue": "The Journal of the acoustical society of America",
                "volume": "25",
                "issue": "5",
                "pages": "975--979",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "E Colin Cherry. 1953. Some experiments on the recognition of speech, with one and with two ears. The Journal of the acoustical society of America, 25(5):975-979.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "On the properties of neural machine translation: Encoder-decoder approaches. Syntax, Semantics and Structure in Statistical Translation",
                "authors": [
                    {
                        "first": "Kyunghyun",
                        "middle": [],
                        "last": "Cho",
                        "suffix": ""
                    },
                    {
                        "first": "Bart",
                        "middle": [],
                        "last": "Van Merri\u00ebnboer",
                        "suffix": ""
                    },
                    {
                        "first": "Dzmitry",
                        "middle": [],
                        "last": "Bahdanau",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshua",
                        "middle": [],
                        "last": "Bengio",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kyunghyun Cho, Bart van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014a. On the proper- ties of neural machine translation: Encoder-decoder approaches. Syntax, Semantics and Structure in Sta- tistical Translation, page 103.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
                "authors": [
                    {
                        "first": "Kyunghyun",
                        "middle": [],
                        "last": "Cho",
                        "suffix": ""
                    },
                    {
                        "first": "Bart",
                        "middle": [],
                        "last": "Van Merri\u00ebnboer",
                        "suffix": ""
                    },
                    {
                        "first": "Caglar",
                        "middle": [],
                        "last": "Gulcehre",
                        "suffix": ""
                    },
                    {
                        "first": "Dzmitry",
                        "middle": [],
                        "last": "Bahdanau",
                        "suffix": ""
                    },
                    {
                        "first": "Fethi",
                        "middle": [],
                        "last": "Bougares",
                        "suffix": ""
                    },
                    {
                        "first": "Holger",
                        "middle": [],
                        "last": "Schwenk",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshua",
                        "middle": [],
                        "last": "Bengio",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1406.1078"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "End-to-end continuous speech recognition using attention-based recurrent NN: First results",
                "authors": [
                    {
                        "first": "Jan",
                        "middle": [],
                        "last": "Chorowski",
                        "suffix": ""
                    },
                    {
                        "first": "Dzmitry",
                        "middle": [],
                        "last": "Bahdanau",
                        "suffix": ""
                    },
                    {
                        "first": "Kyunghyun",
                        "middle": [],
                        "last": "Cho",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshua",
                        "middle": [],
                        "last": "Bengio",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1412.1602"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Jan Chorowski, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. End-to-end continuous speech recognition using attention-based recurrent NN: First results. arXiv preprint arXiv:1412.1602.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Darpa TIMIT acoustic-phonetic continous speech corpus cd-rom",
                "authors": [
                    {
                        "first": "Lori",
                        "middle": [
                            "F"
                        ],
                        "last": "John S Garofolo",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Lamel",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "William",
                        "suffix": ""
                    },
                    {
                        "first": "Jonathon",
                        "middle": [
                            "G"
                        ],
                        "last": "Fisher",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [
                            "S"
                        ],
                        "last": "Fiscus",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Pallett",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "NIST speech disc 1-1.1. NASA STI/Recon technical report n",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John S Garofolo, Lori F Lamel, William M Fisher, Jonathon G Fiscus, and David S Pallett. 1993. Darpa TIMIT acoustic-phonetic continous speech corpus cd-rom. NIST speech disc 1-1.1. NASA STI/Recon technical report n, 93.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Supervised sequence labelling",
                "authors": [
                    {
                        "first": "Alex",
                        "middle": [],
                        "last": "Graves",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Supervised Sequence Labelling with Recurrent Neural Networks",
                "volume": "",
                "issue": "",
                "pages": "5--13",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alex Graves. 2012. Supervised sequence labelling. In Supervised Sequence Labelling with Recurrent Neu- ral Networks, pages 5-13. Springer.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Deep visualsemantic alignments for generating image descriptions",
                "authors": [
                    {
                        "first": "Andrej",
                        "middle": [],
                        "last": "Karpathy",
                        "suffix": ""
                    },
                    {
                        "first": "Li",
                        "middle": [],
                        "last": "Fei-Fei",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
                "volume": "",
                "issue": "",
                "pages": "3128--3137",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andrej Karpathy and Li Fei-Fei. 2015. Deep visual- semantic alignments for generating image descrip- tions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3128-3137.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Creating corpora for speech-to-speech translation",
                "authors": [
                    {
                        "first": "Genichiro",
                        "middle": [],
                        "last": "Kikui",
                        "suffix": ""
                    },
                    {
                        "first": "Eiichiro",
                        "middle": [],
                        "last": "Sumita",
                        "suffix": ""
                    },
                    {
                        "first": "Toshiyuki",
                        "middle": [],
                        "last": "Takezawa",
                        "suffix": ""
                    },
                    {
                        "first": "Seiichi",
                        "middle": [],
                        "last": "Yamamoto",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Eighth European Conference on Speech Communication and Technology",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Genichiro Kikui, Eiichiro Sumita, Toshiyuki Takezawa, and Seiichi Yamamoto. 2003. Creating corpora for speech-to-speech translation. In Eighth European Conference on Speech Communication and Technology.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Joint ctc-attention based end-to-end speech recognition using multi-task learning",
                "authors": [
                    {
                        "first": "Suyoun",
                        "middle": [],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "Takaaki",
                        "middle": [],
                        "last": "Hori",
                        "suffix": ""
                    },
                    {
                        "first": "Shinji",
                        "middle": [],
                        "last": "Watanabe",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Acoustics, Speech and Signal Processing",
                "volume": "",
                "issue": "",
                "pages": "4835--4839",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Suyoun Kim, Takaaki Hori, and Shinji Watanabe. 2017. Joint ctc-attention based end-to-end speech recognition using multi-task learning. In Acous- tics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, pages 4835- 4839. IEEE.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Adam: A method for stochastic optimization",
                "authors": [
                    {
                        "first": "Diederik",
                        "middle": [],
                        "last": "Kingma",
                        "suffix": ""
                    },
                    {
                        "first": "Jimmy",
                        "middle": [],
                        "last": "Ba",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1412.6980"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Moses: Open source toolkit for statistical machine translation",
                "authors": [
                    {
                        "first": "Philipp",
                        "middle": [],
                        "last": "Koehn",
                        "suffix": ""
                    },
                    {
                        "first": "Hieu",
                        "middle": [],
                        "last": "Hoang",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandra",
                        "middle": [],
                        "last": "Birch",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Callison-Burch",
                        "suffix": ""
                    },
                    {
                        "first": "Marcello",
                        "middle": [],
                        "last": "Federico",
                        "suffix": ""
                    },
                    {
                        "first": "Nicola",
                        "middle": [],
                        "last": "Bertoldi",
                        "suffix": ""
                    },
                    {
                        "first": "Brooke",
                        "middle": [],
                        "last": "Cowan",
                        "suffix": ""
                    },
                    {
                        "first": "Wade",
                        "middle": [],
                        "last": "Shen",
                        "suffix": ""
                    },
                    {
                        "first": "Christine",
                        "middle": [],
                        "last": "Moran",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Zens",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions",
                "volume": "",
                "issue": "",
                "pages": "177--180",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In Pro- ceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions, pages 177-180. Association for Computational Linguis- tics.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Navdeep Jaitly, and Ilya Sutskever. 2016. Learning online alignments with continuous rewards policy gradient",
                "authors": [
                    {
                        "first": "Yuping",
                        "middle": [],
                        "last": "Luo",
                        "suffix": ""
                    },
                    {
                        "first": "Chung-Cheng",
                        "middle": [],
                        "last": "Chiu",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1608.01281"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Yuping Luo, Chung-Cheng Chiu, Navdeep Jaitly, and Ilya Sutskever. 2016. Learning online alignments with continuous rewards policy gradient. arXiv preprint arXiv:1608.01281.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Effective approaches to attentionbased neural machine translation",
                "authors": [
                    {
                        "first": "Minh-Thang",
                        "middle": [],
                        "last": "Luong",
                        "suffix": ""
                    },
                    {
                        "first": "Hieu",
                        "middle": [],
                        "last": "Pham",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher D",
                        "middle": [],
                        "last": "Manning",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1508.04025"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. arXiv preprint arXiv:1508.04025.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Regularizing neural networks by penalizing confident output distributions",
                "authors": [
                    {
                        "first": "Gabriel",
                        "middle": [],
                        "last": "Pereyra",
                        "suffix": ""
                    },
                    {
                        "first": "George",
                        "middle": [],
                        "last": "Tucker",
                        "suffix": ""
                    },
                    {
                        "first": "Jan",
                        "middle": [],
                        "last": "Chorowski",
                        "suffix": ""
                    },
                    {
                        "first": "\u0141ukasz",
                        "middle": [],
                        "last": "Kaiser",
                        "suffix": ""
                    },
                    {
                        "first": "Geoffrey",
                        "middle": [],
                        "last": "Hinton",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1701.06548"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Gabriel Pereyra, George Tucker, Jan Chorowski, \u0141ukasz Kaiser, and Geoffrey Hinton. 2017. Regular- izing neural networks by penalizing confident output distributions. arXiv preprint arXiv:1701.06548.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "The Kaldi speech recognition toolkit",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Povey",
                        "suffix": ""
                    },
                    {
                        "first": "Arnab",
                        "middle": [],
                        "last": "Ghoshal",
                        "suffix": ""
                    },
                    {
                        "first": "Gilles",
                        "middle": [],
                        "last": "Boulianne",
                        "suffix": ""
                    },
                    {
                        "first": "Lukas",
                        "middle": [],
                        "last": "Burget",
                        "suffix": ""
                    },
                    {
                        "first": "Ondrej",
                        "middle": [],
                        "last": "Glembek",
                        "suffix": ""
                    },
                    {
                        "first": "Nagendra",
                        "middle": [],
                        "last": "Goel",
                        "suffix": ""
                    },
                    {
                        "first": "Mirko",
                        "middle": [],
                        "last": "Hannemann",
                        "suffix": ""
                    },
                    {
                        "first": "Petr",
                        "middle": [],
                        "last": "Motlicek",
                        "suffix": ""
                    },
                    {
                        "first": "Yanmin",
                        "middle": [],
                        "last": "Qian",
                        "suffix": ""
                    },
                    {
                        "first": "Petr",
                        "middle": [],
                        "last": "Schwarz",
                        "suffix": ""
                    },
                    {
                        "first": "Jan",
                        "middle": [],
                        "last": "Silovsky",
                        "suffix": ""
                    },
                    {
                        "first": "Georg",
                        "middle": [],
                        "last": "Stemmer",
                        "suffix": ""
                    },
                    {
                        "first": "Karel",
                        "middle": [],
                        "last": "Vesely",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "IEEE 2011 Workshop on Automatic Speech Recognition and Understanding",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel Povey, Arnab Ghoshal, Gilles Boulianne, Lukas Burget, Ondrej Glembek, Nagendra Goel, Mirko Hannemann, Petr Motlicek, Yanmin Qian, Petr Schwarz, Jan Silovsky, Georg Stemmer, and Karel Vesely. 2011. The Kaldi speech recognition toolkit. In IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Pro- cessing Society. IEEE Catalog No.: CFP11SRW- USB.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Online and linear-time attention by enforcing monotonic alignments",
                "authors": [
                    {
                        "first": "Colin",
                        "middle": [],
                        "last": "Raffel",
                        "suffix": ""
                    },
                    {
                        "first": "Thang",
                        "middle": [],
                        "last": "Luong",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Peter",
                        "suffix": ""
                    },
                    {
                        "first": "Ron",
                        "middle": [
                            "J"
                        ],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Douglas",
                        "middle": [],
                        "last": "Weiss",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Eck",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1704.00784"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Colin Raffel, Thang Luong, Peter J Liu, Ron J Weiss, and Douglas Eck. 2017. Online and linear-time at- tention by enforcing monotonic alignments. arXiv preprint arXiv:1704.00784.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "The dynamic representation of scenes",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Ronald",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Rensink",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Visual cognition",
                "volume": "7",
                "issue": "1-3",
                "pages": "17--42",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ronald A Rensink. 2000. The dynamic representation of scenes. Visual cognition, 7(1-3):17-42.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Sequence-to-Sequence learning with neural networks",
                "authors": [
                    {
                        "first": "Ilya",
                        "middle": [],
                        "last": "Sutskever",
                        "suffix": ""
                    },
                    {
                        "first": "Oriol",
                        "middle": [],
                        "last": "Vinyals",
                        "suffix": ""
                    },
                    {
                        "first": "Quoc V",
                        "middle": [],
                        "last": "Le",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Advances in neural information processing systems",
                "volume": "",
                "issue": "",
                "pages": "3104--3112",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence-to-Sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104-3112.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Chainer: a next-generation open source framework for deep learning",
                "authors": [
                    {
                        "first": "Seiya",
                        "middle": [],
                        "last": "Tokui",
                        "suffix": ""
                    },
                    {
                        "first": "Kenta",
                        "middle": [],
                        "last": "Oono",
                        "suffix": ""
                    },
                    {
                        "first": "Shohei",
                        "middle": [],
                        "last": "Hido",
                        "suffix": ""
                    },
                    {
                        "first": "Justin",
                        "middle": [],
                        "last": "Clayton",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of Workshop on Machine Learning Systems (Learn-ingSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS)",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Seiya Tokui, Kenta Oono, Shohei Hido, and Justin Clayton. 2015. Chainer: a next-generation open source framework for deep learning. In Proceedings of Workshop on Machine Learning Systems (Learn- ingSys) in The Twenty-ninth Annual Conference on Neural Information Processing Systems (NIPS).",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
                "authors": [
                    {
                        "first": "Yonghui",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "Mike",
                        "middle": [],
                        "last": "Schuster",
                        "suffix": ""
                    },
                    {
                        "first": "Zhifeng",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Quoc",
                        "suffix": ""
                    },
                    {
                        "first": "Mohammad",
                        "middle": [],
                        "last": "Le",
                        "suffix": ""
                    },
                    {
                        "first": "Wolfgang",
                        "middle": [],
                        "last": "Norouzi",
                        "suffix": ""
                    },
                    {
                        "first": "Maxim",
                        "middle": [],
                        "last": "Macherey",
                        "suffix": ""
                    },
                    {
                        "first": "Yuan",
                        "middle": [],
                        "last": "Krikun",
                        "suffix": ""
                    },
                    {
                        "first": "Qin",
                        "middle": [],
                        "last": "Cao",
                        "suffix": ""
                    },
                    {
                        "first": "Klaus",
                        "middle": [],
                        "last": "Gao",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Macherey",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1609.08144"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural ma- chine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Show, attend and tell: Neural image caption generation with visual attention",
                "authors": [
                    {
                        "first": "Kelvin",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "Jimmy",
                        "middle": [],
                        "last": "Ba",
                        "suffix": ""
                    },
                    {
                        "first": "Ryan",
                        "middle": [],
                        "last": "Kiros",
                        "suffix": ""
                    },
                    {
                        "first": "Kyunghyun",
                        "middle": [],
                        "last": "Cho",
                        "suffix": ""
                    },
                    {
                        "first": "Aaron",
                        "middle": [
                            "C"
                        ],
                        "last": "Courville",
                        "suffix": ""
                    },
                    {
                        "first": "Ruslan",
                        "middle": [],
                        "last": "Salakhutdinov",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [
                            "S"
                        ],
                        "last": "Zemel",
                        "suffix": ""
                    },
                    {
                        "first": "Yoshua",
                        "middle": [],
                        "last": "Bengio",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 32nd International Conference on Machine Learning",
                "volume": "",
                "issue": "",
                "pages": "2048--2057",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron C. Courville, Ruslan Salakhutdinov, Richard S. Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In Proceedings of the 32nd In- ternational Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, pages 2048- 2057.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Sequenceto-sequence neural net models for grapheme-tophoneme conversion",
                "authors": [
                    {
                        "first": "Kaisheng",
                        "middle": [],
                        "last": "Yao",
                        "suffix": ""
                    },
                    {
                        "first": "Geoffrey",
                        "middle": [],
                        "last": "Zweig",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Sixteenth Annual Conference of the International Speech Communication Association",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kaisheng Yao and Geoffrey Zweig. 2015. Sequence- to-sequence neural net models for grapheme-to- phoneme conversion. In Sixteenth Annual Confer- ence of the International Speech Communication As- sociation.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "text": "Local monotonic attention. Part (1) of Figure 2. Assume we have source sequence with length S, which is encoded by the stack of Bi-LSTM (see Figure 1) into S encoded states h e = [h e 1 , ..., h e S ]",
                "num": null,
                "type_str": "figure"
            },
            "TABREF1": {
                "text": "Results from baseline and proposed method on G2P task with CMUDict test set",
                "content": "<table><tr><td>Model</td><td>PER (%)</td><td>WER (%)</td></tr><tr><td>Baseline</td><td/><td/></tr><tr><td>Enc-Dec LSTM (2 lyr) (Yao and Zweig, 2015)</td><td colspan=\"2\">7.63 28.61</td></tr><tr><td>Bi-LSTM (3 lyr) (Yao and Zweig, 2015)</td><td colspan=\"2\">5.45 23.55</td></tr><tr><td>Att Enc-Dec with Global MLP Scorer (ours)</td><td colspan=\"2\">5.96 25.55</td></tr><tr><td>Att Enc-Dec with local-m (ours) (Luong et al., 2015)</td><td colspan=\"2\">5.64 24.32</td></tr><tr><td>Proposed</td><td/><td/></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 2)</td><td colspan=\"2\">5.45 23.15</td></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 3)</td><td colspan=\"2\">5.43 23.19</td></tr></table>",
                "num": null,
                "type_str": "table",
                "html": null
            },
            "TABREF2": {
                "text": "",
                "content": "<table><tr><td colspan=\"2\">: Results from baseline and proposed</td></tr><tr><td colspan=\"2\">method on English-to-France and Indonesian-to-</td></tr><tr><td>English translation tasks.</td><td/></tr><tr><td>Model</td><td>BLEU</td></tr><tr><td colspan=\"2\">BTEC English to France</td></tr><tr><td>Baseline</td><td/></tr><tr><td>Att Enc-Dec with Global MLP Scorer</td><td>49.0</td></tr><tr><td>Att Enc-Dec with local-m (ours) (Luong et al., 2015)</td><td>50.4</td></tr><tr><td>Proposed</td><td/></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 4)</td><td>51.2</td></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 6)</td><td>51.1</td></tr><tr><td colspan=\"2\">BTEC Indonesian to English</td></tr><tr><td>Baseline</td><td/></tr><tr><td>Att Enc-Dec with Global MLP Scorer</td><td>38.2</td></tr><tr><td>Att Enc-Dec with local-m (ours) (Luong et al., 2015)</td><td>39.8</td></tr><tr><td>Proposed</td><td/></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 4)</td><td>40.9</td></tr><tr><td>Att Enc-Dec + Unconst (exp) (2\u03c3 = 6)</td><td>41.8</td></tr><tr><td>7.3 Result Discussion</td><td/></tr></table>",
                "num": null,
                "type_str": "table",
                "html": null
            }
        }
    }
}