{
    "paper_id": "2021",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:24:35.708808Z"
    },
    "title": "Builder, we have done it: Evaluating & Extending Dialogue-AMR NLU Pipeline for Two Collaborative Domains",
    "authors": [
        {
            "first": "Claire",
            "middle": [],
            "last": "Bonial",
            "suffix": "",
            "affiliation": {
                "laboratory": "Army Research Laboratory",
                "institution": "U.S",
                "location": {
                    "postCode": "20783",
                    "settlement": "Adelphi",
                    "region": "MD"
                }
            },
            "email": "claire.n.bonial.civ@mail.mil"
        },
        {
            "first": "Mitchell",
            "middle": [],
            "last": "Abrams",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "Institute for Human and Machine Cognition",
                "location": {
                    "postCode": "32502",
                    "settlement": "Pensacola",
                    "region": "FL"
                }
            },
            "email": ""
        },
        {
            "first": "David",
            "middle": [],
            "last": "Traum",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "USC Institute for Creative Technologies",
                "location": {
                    "postCode": "90094",
                    "settlement": "Playa Vista",
                    "region": "CA"
                }
            },
            "email": ""
        },
        {
            "first": "Clare",
            "middle": [
                "R"
            ],
            "last": "Voss",
            "suffix": "",
            "affiliation": {
                "laboratory": "Army Research Laboratory",
                "institution": "U.S",
                "location": {
                    "postCode": "20783",
                    "settlement": "Adelphi",
                    "region": "MD"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We adopt, evaluate, and improve upon a twostep natural language understanding (NLU) pipeline that incrementally tames the variation of unconstrained natural language input and maps to executable robot behaviors. The pipeline first leverages Abstract Meaning Representation (AMR) parsing to capture the propositional content of the utterance, and second converts this into \"Dialogue-AMR,\" which augments standard AMR with information on tense, aspect, and speech acts. Several alternative approaches and training data sets are evaluated for both steps and corresponding components of the pipeline, some of which outperform the original. We extend the Dialogue-AMR annotation schema to cover a different collaborative instruction domain and evaluate on both domains. With very little training data, we achieve promising performance in the new domain, demonstrating the scalability of this approach.",
    "pdf_parse": {
        "paper_id": "2021",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We adopt, evaluate, and improve upon a twostep natural language understanding (NLU) pipeline that incrementally tames the variation of unconstrained natural language input and maps to executable robot behaviors. The pipeline first leverages Abstract Meaning Representation (AMR) parsing to capture the propositional content of the utterance, and second converts this into \"Dialogue-AMR,\" which augments standard AMR with information on tense, aspect, and speech acts. Several alternative approaches and training data sets are evaluated for both steps and corresponding components of the pipeline, some of which outperform the original. We extend the Dialogue-AMR annotation schema to cover a different collaborative instruction domain and evaluate on both domains. With very little training data, we achieve promising performance in the new domain, demonstrating the scalability of this approach.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "We adopt, evaluate, and improve upon the two-step NLU pipeline, described in , which aims to incrementally tame the variation of incoming natural language that the robot must interpret before responding. For each domain in which it operates, the robot must determine whether or not the commands it receives correspond to one of its executable behaviors, such as MOVEMENT (along a front-back axis) and ROTATION. The NLU pipeline leverages AMR to capture the basic content of the input language, and then a conversion system adds behavior time, completion status and speech act information to the original \"Standard-AMR,\" and updates the main action relation of the input AMR to a relation consistently representing an executable robot behavior (see Fig. 1 for a Standard and Dialogue-AMR example comparison). There are two high-level components of the NLU pipeline: a Standard-AMR parser and a graphto-graph conversion system to convert the Standard-AMR into Dialogue-AMR. Here, we offer the first evaluation of both the Dialogue-AMR annotation schema itself and the components of the pipeline used to automatically obtain the Dialogue-AMR. We test not only in the human-robot, search-andnavigation dialogue domain for which the schema and pipeline was developed, but also in a somewhat similar, yet challenging domain: human-human communication collaboratively building structures in the virtual gaming environment, \"Minecraft.\" In this way, we address the question of what would happen if we wanted our robot to collaborate on a new and different task. We refer to this challenge as \"domain extension,\" instead of \"domain adaptation,\" as we aim to maintain the coverage of our original domain while extending to a new one. After providing background on AMR and Dialogue-AMR ( \u00a72) and detailing our approach ( \u00a73), we report on the human-robot evaluation ( \u00a74), followed by the Minecraft evaluation ( \u00a75), and domain extension of the conversion system and subsequent evaluation ( \u00a76). Our contributions include: i. Retraining existing Standard-AMR parsers (3.1) and evaluating on the human-robot (4.1) and Minecraft domains (5.1); ii. Evaluating and improving a conversion system for automatically obtaining Dialogue-AMR (3.2) in both the robot (4.2) and Minecraft (5.2) domains; iii. Extending the coverage of the Dialogue-AMR annotation schema (2.1) to a new domain (6.1) and evaluation after domain extension (6.3).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 748,
                        "end": 754,
                        "text": "Fig. 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "To summarize where this work is situated with respect to the past research on this topic-while details the Dialogue-AMR annotation schema and proposes the two-step pipeline as one way of automatically obtaining Dialogue-AMR, the technical details of an implementation of the pipeline itself are not described and no evaluation is given. Subsequent research from does provide an initial evaluation of a baseline version of the graph-to-graph conversion component of the proposed two-step pipeline; we adopt and evaluate an updated version of this component (described in greater detail in \u00a73.2), however, our evaluation is not directly comparable to the evaluation given in , since the earlier version of the component was tested on only a limited subset of the annotation categories of Dialogue-AMR. Thus, the current paper constitutes the first evaluation of the proposed two-step pipeline and its components, as well as an evaluation of the extensibility of those components and the Dialogue-AMR schema itself to a new domain.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Background",
                "sec_num": "2"
            },
            {
                "text": "The two-step NLU pipeline of leverages AMR, as it abstracts away from some idiosyncratic surface variation in favor of a more consistent representation for the same concept. This serves the purposes of a dialogue system well: AMR smooths over the nuances of language that may be unimportant for mapping a particular input to one of the robot's behaviors. Nonetheless, \"Standard-AMR\" does not represent some aspects of meaning that are critical for the human-robot dialogue domain, where the robot must be cued as to what the current dialogue state is, as well as what the current time and completion status of various instructions are. To capture this information, the NLU pipeline uses the \"Dialogue-AMR\" formalism , which adds action time, completion status (i.e., limited tense, aspect) and speech act information to the Standard-AMR. Additionally, to facilitate the final step of mapping to one of the robot's behaviors, Dialogue-AMR further generalizes from the input language, converting a variety of surface realizations (e.g., turn, rotate, pivot) of a particular action relation into a single canonical numbered relation (e.g., turn-01) to represent one of the robot's behaviors (e.g., RO-TATION). Standard-AMR and Dialogue-AMR are contrasted in Figs. 1 and 2 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1255,
                        "end": 1268,
                        "text": "Figs. 1 and 2",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "AMR & Dialogue-AMR",
                "sec_num": "2.1"
            },
            {
                "text": "In Dialogue-AMR, the content of the Standard-AMR is nested in a structure that adds the speech act information as the root predicate (e.g., command-SA in Figs. 1, 2) . Additionally, the main action from the Standard-AMR (e.g., move-01) is converted to one of the action relations (e.g., go-02), termed the \"robot-concept relation\" that maps to an executable robot behavior. Information about the time of that behavior is added (in Fig. 2 , the motion event will happen in the future, after the speaking time of the command; thus, it is represented as :time after-now). 1 Finally, the behavior completion status, a type of aspect information, is added-whether or not the instructed behavior is telic or contains a clear end point (in Fig. 2 , indicated by completable +). 2 Dialogue-AMR draws upon an inventory of 13 speech acts and 26 robot behaviors or \"robotconcept relations.\" Action time and completion status are integrated into Dialogue-AMR by adopting the annotation schema of Donatelli et al. (2018) , which categorizes the robot behavior as past, present, or future, and categorizes 4 aspectual labels: :stable +/-, :ongoing +/-, :complete +/-, and :habitual +/-. Dialogue-AMR uses the added category :completable +/-to signal whether or not a hypothetical event has an end-goal achievable for the robot.",
                "cite_spans": [
                    {
                        "start": 771,
                        "end": 772,
                        "text": "2",
                        "ref_id": null
                    },
                    {
                        "start": 984,
                        "end": 1007,
                        "text": "Donatelli et al. (2018)",
                        "ref_id": "BIBREF8"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 154,
                        "end": 165,
                        "text": "Figs. 1, 2)",
                        "ref_id": "FIGREF0"
                    },
                    {
                        "start": 431,
                        "end": 437,
                        "text": "Fig. 2",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 733,
                        "end": 739,
                        "text": "Fig. 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "AMR & Dialogue-AMR",
                "sec_num": "2.1"
            },
            {
                "text": "We draw from two datasets with Standard-AMR annotations, collected with the aim of developing an interactive agent for collaboration in grounded scenarios. We leverage the DialAMR corpus (Bonial et al., 2020) as training and evaluation data for the NLU pipeline within the human-robot dialogue domain. DialAMR encompasses 1122 instances of The Situated Corpus of Understanding Trans- 1 In ongoing work to extend the Dialogue-AMR schema, we plan to refine the :time annotations to better capture the possibility that an instructed action could already be underway at speaking time, given that we observed that in highly collaborative dialogue, utterances often overlap with actions.",
                "cite_spans": [
                    {
                        "start": 384,
                        "end": 385,
                        "text": "1",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotated Corpora",
                "sec_num": "2.2"
            },
            {
                "text": "2 End-point information is needed by a robot to execute a behavior in a low-bandwidth environment where there is a communications lag, precluding real-time voice teleoperation. What constitutes a fully specified behavior is somewhat task and robot-specific; for example, a robot with a static, front-facing camera can assume, as a default, that a picture taken for a user will be from this perspective unless the user specifies otherwise, but a robot with a movable, 360-degree view camera may need to ask the user to provide information on the desired camera angle. actions (SCOUT), annotated with both Standard-AMR and Dialogue-AMR. SCOUT is comprised of over 80 hours of dialogues from the robot navigation domain (Marge et al., 2016 (Marge et al., , 2017 , collected via a \"Wizard-of-Oz\" experimental design (Riek, 2012) , in which participants directed what they believed to be an autonomous robot to complete search and navigation tasks. The DialAMR corpus was used in the development of the Dialogue-AMR schema, as well as training and testing of the components of the conversion system of Abrams et al. 2020, which we initially adopt, described in \u00a73.2. The data from SCOUT selected for the Di-alAMR corpus includes a randomly selected, continuous 20-minute experimental trial, which contains 304 utterances (called the Continuous-Trial subset). This is the held-out test set that we use throughout our \"in-domain\" evaluation, as it is representative of an ongoing human-robot interaction.",
                "cite_spans": [
                    {
                        "start": 717,
                        "end": 736,
                        "text": "(Marge et al., 2016",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 737,
                        "end": 758,
                        "text": "(Marge et al., , 2017",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 812,
                        "end": 824,
                        "text": "(Riek, 2012)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotated Corpora",
                "sec_num": "2.2"
            },
            {
                "text": "In addition to in-domain evaluation, we extend evaluation of the Dialogue-AMR schema and NLU pipeline by annotating and testing on the Minecraft Dialogue Corpus (Narayan-Chen et al., 2019) . This corpus consists of 509 conversations and game logs, in which two humans communicate via the Minecraft gaming interface chat window while collaboratively building blocks structures. Standard-AMR annotations for the Minecraft corpus (Bonn et al., 2020) were obtained from the developers via a private data-sharing agreement. Our addition of Dialogue-AMR annotations to this corpus is described in \u00a76.1.",
                "cite_spans": [
                    {
                        "start": 161,
                        "end": 188,
                        "text": "(Narayan-Chen et al., 2019)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 427,
                        "end": 446,
                        "text": "(Bonn et al., 2020)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Annotated Corpora",
                "sec_num": "2.2"
            },
            {
                "text": "We adopt and evaluate the two-step NLU pipeline described in and Bonial et al. (2019) , including both a Standard-AMR parser and a system for converting this into Dialogue-AMR. We describe our selection of an initial Standard-AMR parser and conversion system, both of which we retrain and improve upon, below.",
                "cite_spans": [
                    {
                        "start": 65,
                        "end": 85,
                        "text": "Bonial et al. (2019)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Approach: Two-Step NLU Pipeline",
                "sec_num": "3"
            },
            {
                "text": "Standard-AMR provides an initial interpretation of an utterance to be transferred to the Dialogue-AMR. Therefore, an effective Standard-AMR parser is critical for the overall success of the NLU pipeline.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Standard-AMR Retrained Parser",
                "sec_num": "3.1"
            },
            {
                "text": "We considered several open-source AMR parsers as candidates, and selected two recent releases, the parsers described in Zhang et al. (2019) and Lindemann et al. (2019) , which both make use of BERT embeddings (Devlin et al., 2019) and were evaluated on AMR releases, thus providing us with baselines to compare them to each other and to assess our retrained models against their reported performances.",
                "cite_spans": [
                    {
                        "start": 120,
                        "end": 139,
                        "text": "Zhang et al. (2019)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 144,
                        "end": 167,
                        "text": "Lindemann et al. (2019)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 209,
                        "end": 230,
                        "text": "(Devlin et al., 2019)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Standard-AMR Retrained Parser",
                "sec_num": "3.1"
            },
            {
                "text": "We were able to retrain both of these state-of-theart AMR parsers on the AMR 2.0 corpus and the recently released AMR 3.0 corpus (a larger corpus including the 2.0 data), and then also retrain them on each of these individual releases of Standard-AMR together with the Standard-AMR subset of the DialAMR corpus of over 800 Standard-AMRs, to adapt them to our human-robot dialogue domain. We evaluated these particular combinations of training data because we wanted to explore whether or not the larger set of data in the AMR 3.0 corpus improved performance on the human-robot dialogue domain, or if it further washed out the distinctions from our smaller in-domain corpus. This yielded a total of eight parsers (see Table 1 ) for us to evaluate and select from for the purpose of then including in the full NLU parsing pipeline.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 717,
                        "end": 724,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Standard-AMR Retrained Parser",
                "sec_num": "3.1"
            },
            {
                "text": "The next step in the NLU pipeline is a graph-tograph conversion system that uses the input of the utterance text and the Standard-AMR graph to create a Dialogue-AMR graph. We leverage an existing conversion system, \"Abrams+\", and experiment with improvements to how it classifies the robotconcept relation in our own updated graph-to-graph conversion system, \"G2G\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conversion System",
                "sec_num": "3.2"
            },
            {
                "text": "We obtained a version of the conversion system described in , which had been updated by that author in two ways: i. expanded to handle the additional speech acts and robotconcept relation categories of the full Dialogue-AMR schema outlined in , not all of which were present during the original development, and ii. shifted from a Na\u00efve Bayes to a SVM model for speech act classification. We refer to this system as \"Abrams+\". This graph-to-graph conversion system implements both rule-based and classifier-based methods in converting a Standard-AMR graph into a Dialogue-AMR graph, and leverages the original utterance and the structure of the Standard-AMR to produce the final Dialogue-AMR, which includes the speech act, tense and aspect information, and a designation of the robotconcept relation. As we use this system as our starting point for improvement, we will briefly describe how each of these additions are made in the order just listed, but refer the reader to Abrams et al. (2020) for full details.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abrams+ Conversion",
                "sec_num": "3.2.1"
            },
            {
                "text": "Following the numbering of the example in Fig. 2 , the first step in the transformation process employs a SVM model with token unigrams features to predict the speech act from the original utterance-critical information for human-robot communication that cannot be gleaned from the Standard-AMR graphs alone. 3 After classification, the speech act label is then stored as a slot to be added to the Dialogue-AMR graph and referenced for decision-making processes downstream. Second, to add behavior time, another classifiera Na\u00efve Bayes model using token unigrams as features-determines if the event corresponding to the robot behavior pertains to a past, present, or future action. Third, designation of the robot behavior is implemented through a keyword-based approach, which extracts the top root relation (keyword) in the Standard-AMR and checks it against a keyword dictionary of similar actions, and maps it to a single robot-concept relation. Fourth, particu- 3 We acknowledge that the interpretation of speech acts, and indirect speech acts in particular, can be affected by context. Following (Hinkelman and Allen, 1989) , we start with only the linguistic signal in the first phase. Since the restricted domain is predictable, it is usually sufficient, but further research aims to leverage situational information and dialogue context where necessary, e.g., to disambiguate an ability question from an indirect instruction. lar combinations of speech act, tense, and the presence or absence of certain arguments of the robotconcept relation trigger an aspectual label that corresponds to an action's completion status. In the final step of transformation process, the system's rule-based methods use pattern matching techniques to serve multiple functions, including slot filling and slot changing (e.g., transforming mentions of you to the fixed role of the addressee in Dialogue-AMR).",
                "cite_spans": [
                    {
                        "start": 967,
                        "end": 968,
                        "text": "3",
                        "ref_id": null
                    },
                    {
                        "start": 1102,
                        "end": 1129,
                        "text": "(Hinkelman and Allen, 1989)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 42,
                        "end": 48,
                        "text": "Fig. 2",
                        "ref_id": "FIGREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Abrams+ Conversion",
                "sec_num": "3.2.1"
            },
            {
                "text": "While we hypothesize speech act, tense, and aspect classification may be fairly robust to language in a new domain, we readily acknowledge that new domains will require the robot to engage in novel behaviors, for example, BUILDING in the Minecraft domain. Thus, although there are many different aspects of the conversion system that we could attempt to improve upon (e.g., classifier types, ordering of components), we saw an opportunity to have the most impact on system performance in multiple domains by focusing on varying the robotconcept relation classification approach. We describe three variants (one keyword-based and two classifier-based) of our updated G2G conversion system below.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G2G: Our Updated Conversion System",
                "sec_num": "3.2.2"
            },
            {
                "text": "G2G Expanded Keyword-Based Variant We expanded upon the keyword approach of the Abrams+ system, which was restricted to searching for keyword matches with the top, root relation of the Standard-AMR. We found that this restriction was problematic because the same root relation in the Standard-AMR could correspond to multiple robot-concept relations. Move and go, generally parsed as move-01 and go-02, are particularly prevalent and could correspond to either front-back MOVEMENT or a ROTATION behavior; both of these were keywords triggering front-back movement in Abrams+, which therefore incorrectly categorized utterances like Move right 45 degrees (a ROTATION behavior). In our expansion, the G2G keyword variant searches for matches within all utterance tokens, AMR relations, and arguments. Furthermore, the keyword dictionary was informed by a data-driven analysis in which we created histograms of all utterance tokens and Standard-AMR relations within an instance mapped to a particular robot-concept relation in the manual Dialogue-AMR annotations. In this way, we could see which words and relations occurred with multiple robotconcept relations, like move-01, and therefore remove these from our keyword dictionaries, while adding keywords that are unique to a particular robot-concept relation in the data, such as degrees, which consistently cues a ROTATION behavior.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G2G: Our Updated Conversion System",
                "sec_num": "3.2.2"
            },
            {
                "text": "G2G One-Hot and GloVe Variants We also experimented with classifier-based approaches to robot-behavior classification, which we hypothesized may be more efficient to extend to a new domain than a keyword-based approach. The classifiers are Support Vector Machines with different vectorization methods including one-hot encoding and word embeddings from GloVe. Training data for the robot-concept relation classifier comes from examples of each robot-concept category in Bonial et al. 2020, gold-standard labels from the Continuous-Trial subset utterances 101-305 (those not used in a held-out test set), and examples pulled from speech act classifier training bins. There are a total of 26 labels for this task, and while many of the movement actions were abundant from these other sources, some of the minority labels (e.g., equip-01, wait-01, clarify-10) required up-sampling to balance training proportions.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "G2G: Our Updated Conversion System",
                "sec_num": "3.2.2"
            },
            {
                "text": "We evaluated the retrained parsers on the SCOUT Continuous-trial dataset. We note substantial improvement in Standard-AMR parsing Smatch scores on this set when training with DialAMR in addition to the base training sets (AMR 2.0 and 3.0). 4 Results for the AMR parsing models are presented in Table 1 . The noticeably high scores on the parsers retrained on the AMR 3.0 + DialAMR is due in large part to the nature of the speakers' language in the SCOUT corpus and the high levels of similarity in participants' instructions to the robot. This underscores how critical evaluation in another dialogue domain is. We note that, at the segment level as well as can be seen in the Table 1, the Lindemann et al. (2019) parser retrained with DialAMR data evaluated across-the-board to higher scores than the comparably retrained Zhang et al. (2019) parser. Of those two Lindemann et al. (2019) parsers whose Smatch scores did not differ significantly, we selected the one trained with the larger 3.0 dataset with its larger language model as the first component in the full parsing pipeline. ",
                "cite_spans": [
                    {
                        "start": 823,
                        "end": 842,
                        "text": "Zhang et al. (2019)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 294,
                        "end": 301,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "In-Domain Standard-AMR Parsing",
                "sec_num": "4.1"
            },
            {
                "text": "To pinpoint the performance of the conversion system alone (without error introduced by the automatic Standard-AMR parsing), we report results with gold-standard, manually assigned input Standard-AMR parses. Results are summarized in Evaluation Domain A of Table 2 . Focusing initially on the overall Smatch Precision, Recall, and F-scores of the conversion system, our updated system, G2G, leveraging the classifier with one-hot vectorization achieves the highest precision (.85) and F-score (.83) in our domain. All approaches perform comparably overall, especially given that Smatch scores can vary slightly (Opitz et al., 2020) because Smatch is a non-deterministic, greedy hillclimbing algorithm with a preset, default number of random restarts (Cai and Knight, 2013) .",
                "cite_spans": [
                    {
                        "start": 611,
                        "end": 631,
                        "text": "(Opitz et al., 2020)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 750,
                        "end": 772,
                        "text": "(Cai and Knight, 2013)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 257,
                        "end": 264,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "In-Domain Conversion to Dialogue-AMR",
                "sec_num": "4.2"
            },
            {
                "text": "Drilling down into the accuracy of the individual component classification tasks, we find accuracy scores of 1.00 for speech acts, .93 for tense, and .93 for aspect across all system variants, as these components are unchanged, and we only alter the robot-concept classification. Again, we note that these accuracy scores are extremely high, given the repetitive nature of the language and prevalence of certain types of commands and feedback assertions. For robot-concept classification, the G2G expanded keyword approach (.97 accuracy) does outperform the Abrams+ baseline keyword method (.94 accuracy). Both keyword approaches outperform the G2G classifier-based approaches: one-hot vectorization achieves an accuracy of .90 and GloVe an accuracy of .84. Notably, higher accuracy on the robot-concept classification task does not necessarily translate to higher Smatch Fscores overall. High component accuracy but lower overall F-Score generally indicates that while the system is correctly determining all of the information being added to the Dialogue-AMR, it is not always putting these pieces together correctly. In other words, the final step in the conversion system, where slots are captured and changed from the original Standard-AMR structure to the structure of the Dialogue-AMR, is where some of the error reflected in Smatch scores stems from.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "In-Domain Conversion to Dialogue-AMR",
                "sec_num": "4.2"
            },
            {
                "text": "In this section, we report on the Minecraft domain performance of the NLU pipeline with the retrained Standard-AMR parser, the Abrams+ conversion system, and our updated G2G system variants prior to any domain adaptation in order to determine how vital domain extension really is in somewhat similar instruction-giving domains. Given that theoretically speech acts, tense and aspect are somewhat consistent in language regardless of the domain, we hypothesize that these features of our annotation schema and the components of the conversion system capturing them will perform reasonably well on the new Minecraft dialogue domain. However, the main actions or behaviors involved in the collaboration of interlocutors in the original search and navigation domain are quite different from those of building virtual structures from blocks in the new Minecraft domain. We therefore expect that the conversion system will fail to correctly map many of the main action predicates in the Minecraft dialogues to an executable robot behavior. However, we accept this as an interesting question of domain extension for moving our robot to a new task: Is it more efficient to expand a rule-based approach for capturing these new behaviors, or to use a classifierbased approach?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Minecraft Domain Evaluation",
                "sec_num": "5"
            },
            {
                "text": "We test the parser selected as the first pipeline component (described in \u00a74.1) on Minecraft data, scor-ing the parser output on 100 sequential instances of Minecraft dialogue against manually assigned Standard-AMR annotations. 5 The overall Smatch F-score is .57, with a Precision of .63 and Recall of .52. Thus, despite the potential similarity in the two instruction-giving dialogue domains, it is clear that the automatic parsing performance is significantly worse for the Minecraft data than our original domain (where the best Smatch F-score was .93). Error analysis reveals some extremely complicated language phenomena, including dimensions and frequency expressions capturing, for example, the repetition of a placement action: For the four squares that come out from the middle blocks, add two blue blocks on. Although this indicates that the parser would benefit from retraining with Minecraft data, 6 in our immediate research we focus on domain extension of the conversion system in order to explore how robust the conversion system might be to noise in the parser input.",
                "cite_spans": [
                    {
                        "start": 228,
                        "end": 229,
                        "text": "5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Minecraft Standard-AMR Parsing",
                "sec_num": "5.1"
            },
            {
                "text": "This evaluation compares the conversion system output against manually assigned Dialogue-AMRs for the same 100-instance, sequential subset of utterances from the Minecraft corpus used as the test set for the Standard-AMR parser (see \u00a76.1 for Dialogue-AMR annotation details); again, we use gold-standard, manually assigned Standard-AMR parses as input to the conversion system. Results are summarized in Evaluation Domain B of Ta-ble 2. Focusing first on overall Smatch scores, our updated system variant leveraging the expanded keyword approach performs slightly better (.68 Fscore) than both the baseline Abrams+ (.67 F-score) and the classifier-based approaches (.67 F-scores).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Minecraft Conversion to Dialogue-AMR",
                "sec_num": "5.2"
            },
            {
                "text": "Although the scores have dropped about 15 points from the original domain, they remain comparable across variants. When drilling down into the accuracy of the individual components of the conversion system, we find that robot concept classification yields the lowest accuracy scores, with a range of .20-.32. Among the variant approaches to robot-concept classification explored, the expanded keyword approach achieves the highest accuracy. The speech act and tense have the same accuracy scores across all versions, .44 and .56, respectively, since these classifiers are stable within the system variants. In this evaluation, aspect varies slightly across approaches as it depends on combinations of speech act and robot-concept relation slot values-its accuracy ranges from .25-.49, with the Abrams+ variant obtaining the highest result. Thus, we see that our hypothesis that speech act, tense, and aspect classification may be fairly robust to a new domain is partially confirmed: robot-concept classification is certainly the most challenging with the lowest accuracy, but the performance of all components is significantly worse than the original domain, suggesting more widespread differences in the language of the two domains.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Minecraft Conversion to Dialogue-AMR",
                "sec_num": "5.2"
            },
            {
                "text": "Here, we describe the small amount of domain extension done to tailor our G2G conversion system to the Minecraft domain, beginning with extensions of the annotation schema itself.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Domain Extension",
                "sec_num": "6"
            },
            {
                "text": "One expert Standard-AMR and Dialogue-AMR annotator provided manual Dialogue-AMR annotations to a continuous 100-instance subset of the Minecraft corpus to serve as a test set. This was done by manually augmenting the Standard-AMR release of the Minecraft corpus, maintaining all of the Standard-AMR annotation choices. Additionally, a separate, continuous 200-instance subset of the data was annotated with speech acts and the corresponding robot-concept relations of Dialogue-AMR to serve as training data for the speech act classifier and robot-concept relation classification. 7 In providing the manual Dialogue-AMR annotation of the Minecraft data, we noted several changes and additions that needed to be made to the annotation schema to account for novel concepts arising in the collaborative building domain, as well as novel dialogue phenomena. First, as expected, we added agent behaviors that would be needed for this domain: BUILDING, represented with the relation build-01 (e.g., What are we building this time?), and PLACING, represented with the relation move-01 (e.g., Please place two red blocks on top of each side...).",
                "cite_spans": [
                    {
                        "start": 580,
                        "end": 581,
                        "text": "7",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extending Dialogue-AMR Schema",
                "sec_num": "6.1"
            },
            {
                "text": "Second, we noted novel dialogue phenomena that we had not observed in the SCOUT data. Speech acts were often nested in this data, such that the content of one speech act was not a typical agent behavior (e.g., a speech act of commanding a ROTATION behavior), but instead another speech act. For example, there were frequent requests for evaluation, often after each building step was completed: How's this? and Is this good? 8 As a result, we had to shift our annotation schema and conversion system in order to allow for speech act relations to sit where we would normally expect the robot-concept relation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extending Dialogue-AMR Schema",
                "sec_num": "6.1"
            },
            {
                "text": "Finally, we noted frequent use of the verb need as an indicator of a less direct command in the Minecraft data: This will need to be placed as far right as you can.... This was interpreted by the interlocutor as a command, i.e., Place this as far right as you can. Thus, the need relation that roots the Standard-AMR ultimately mapped to the command-SA relation of the Dialogue AMR. This phenomenon has significant ramifications for the conversion system, as it was generally assumed, for the SCOUT data, that the utterance and Standard-AMR provides propositional content cuing the robot-concept relation, but we did not expect AMR relations corresponding to the speech act in our 7 Contact the first author for Minecraft Dialogue-AMR annotations used for train/test.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extending Dialogue-AMR Schema",
                "sec_num": "6.1"
            },
            {
                "text": "8 Following Bunt et al. (2012) , Dialogue-AMR speech acts are distinguished between Information Transfer Functions and Action Discussion Functions. Thus, while syntactically questions, cases such as How's this? are not annotated using the Dialogue-AMR Question speech act, which is reserved for questions that obligate the addressee to introduce new information content into the conversation and demonstrate a commitment to the answer assertion (Traum, 2003) . In contrast, these cases obligate the addressee to evaluate the current state of play while simultaneously providing feedback that common conversational ground has been achieved with respect to the desired structure. Indeed, common responses such as Excellent, Builder do not fit with a question interpretation. domain, although plausible (e.g., I command you to move forward).",
                "cite_spans": [
                    {
                        "start": 12,
                        "end": 30,
                        "text": "Bunt et al. (2012)",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 445,
                        "end": 458,
                        "text": "(Traum, 2003)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extending Dialogue-AMR Schema",
                "sec_num": "6.1"
            },
            {
                "text": "We added to our expanded keyword dictionary to test the effectiveness of a rule-based approach in domain extension. Only two additional concepts were required, build-01 and move-01, but these robot concepts are extremely prevalent in the data. Additionally, in order to test how well a classifier-based approach would capture new behaviors and extend the conversion system to a new domain, we retrained the robot-concept classifier on 166 new manually-annotated training instances of robot concepts from the Minecraft domain. Domain extension also included retraining the speech act classifier on 224 speech acts found in 200 instances of manually annotated Minecraft data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extending Robot-Concept Classification",
                "sec_num": "6.2"
            },
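            {
                "text": "As an illustration of what extending the keyword approach involves, here is a minimal sketch (in Python; the cue lists are hypothetical examples, not our full dictionaries): each robot-concept relation is associated with a list of lexical cues drawn from the Standard-AMR, and extending to Minecraft amounts to adding entries for build-01 and move-01:\n\nKEYWORDS = {\n    'build-01': ['build', 'place', 'put', 'stack'],\n    'move-01': ['move', 'shift', 'slide'],\n}\n\ndef match_robot_concept(amr_concepts):\n    # Return the first robot concept whose cue list matches any concept\n    # token found in the Standard-AMR graph, or None if nothing matches.\n    for robot_concept, cues in KEYWORDS.items():\n        if any(tok in cues for tok in amr_concepts):\n            return robot_concept\n    return None",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Extending Robot-Concept Classification",
                "sec_num": "6.2"
            },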
            {
                "text": "After domain extension, the G2G variant leveraging the one-hot classifier (.71 F-score) very slightly outperforms the keyword (.70 F-score) and GloVe variants (.70 F-score) (again, comparing system output against manually assigned Dialogue-AMRs for the continuous, 100-instance Minecraft test set).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Domain-Extended G2G Evaluation",
                "sec_num": "6.3"
            },
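            {
                "text": "For reference, a classifier along the lines of the one-hot variant can be sketched with scikit-learn as follows; train_utterances and train_robot_concepts stand in for the manually annotated instances described in \u00a76.2, and the exact features and model differ from our implementation:\n\nfrom sklearn.feature_extraction.text import CountVectorizer\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.pipeline import make_pipeline\n\n# Binary (one-hot) bag-of-words features over all utterance tokens,\n# followed by a linear classifier over robot-concept labels.\nclf = make_pipeline(\n    CountVectorizer(binary=True),\n    LogisticRegression(max_iter=1000),\n)\nclf.fit(train_utterances, train_robot_concepts)\nprint(clf.predict(['place this as far right as you can']))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Domain-Extended G2G Evaluation",
                "sec_num": "6.3"
            },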
            {
                "text": "Results are summarized in the bottom three rows of Evaluation Domain B of Table 2 . The scores remain comparable across all three variants, but we do see improvement overall when comparing against system variants prior to domain extension.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 74,
                        "end": 81,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Domain-Extended G2G Evaluation",
                "sec_num": "6.3"
            },
            {
                "text": "Turning to analysis of the accuracy of individual components of the conversion system, the additional training instances improve speech act classification (from .44 prior to retraining to .57 after) and robot-concept classification for the Minecraft domain. Prior to domain extension, the expanded keyword variant achieved the highest accuracy for robot-concept classification (.32), but classifierbased methods with more training data outperform even a domain-extended, data-driven keyword approach, which achieves an accuracy of .41, while one-hot vectorization achieves an accuracy of .54 and GloVe .45. Error analysis reveals that the keyword-based approach struggles to classify robot concepts in this domain, in part, because of language that contains vocatives (e.g. Excellent, builder)-which triggers a top say-01 relation in the Standard-AMR graph-and various uses of need, which trigger a need-01 relation. As noted in the discussion of domain extension of the annotation schema ( \u00a76.1), both of these root relations do not cue any domain robot concept, but rather provide information about speech acts and speaker/listener roles, which were consistently implicit in our original domain. Thus, we are currently updating the system to allow for certain relations in the Standard-AMR (e.g., need-01) to cue for or map to particular speech acts (e.g., command-SA).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Domain-Extended G2G Evaluation",
                "sec_num": "6.3"
            },
            {
                "text": "This demonstrates a weakness of the keywordbased approach in general: unforeseen linguistic phenomena such as vocatives can strongly affect the accuracy of this approach, while the classifier approach is more robust to these differences since it considers all tokens in the utterance for robot-concept relation prediction, thereby avoiding mis-classification due to this kind of \"noise\" in the data. When considering our earlier hypothesis that the classifier-based approach to robot-concept classification would be more efficient to extend to a new domain than the keyword-based approach, the results and error analysis here provide modest support for this hypothesis. Both approaches are similarly time-efficient as far as the initial extension efforts are concerned: the keyword approach requires manual observation of the data and subsequent selection and addition of keywords to the dictionaries associated with certain robot-concept relations, while the classifier approach requires some additional manual annotation in the new domain. However, empirically the classifier-based approach slightly outperforms the keyword-based approach in the Minecraft domain, and extending the keyword-based approach requires additional changes in traversal of the graph in order to find the appropriate concept to serve as the keyword for matching, so the effort necessarily goes beyond merely selecting and adding keywords.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Domain-Extended G2G Evaluation",
                "sec_num": "6.3"
            },
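            {
                "text": "The traversal change can be sketched with the penman library (again illustrative rather than our actual code): instead of matching keywords only at the root, the system skips wrapper roots such as say-01 and need-01 and searches the embedded content for a concept with a keyword match:\n\nimport penman\n\nWRAPPER_ROOTS = {'say-01', 'need-01'}\n\ndef find_cue_concept(amr_string, known_cues):\n    g = penman.decode(amr_string)\n    # Collect every concept (:instance triple) in the graph, in graph order.\n    concepts = [t for _, role, t in g.triples if role == ':instance']\n    # Ignore wrapper roots that cue speech acts rather than behaviors, then\n    # return the first remaining concept with a keyword match.\n    candidates = [c for c in concepts if c not in WRAPPER_ROOTS]\n    return next((c for c in candidates if c in known_cues), None)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Domain-Extended G2G Evaluation",
                "sec_num": "6.3"
            },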
            {
                "text": "Turning back to our original SCOUT test set after Minecraft domain extension (results summarized in the bottom three rows of Evaluation Domain A in Table 2 ), we find that tailoring the conversion system to Minecraft and expanding the coverage of language that the system can handle has little negative effect on performance in our original domain. We see comparable results for the classifier-based model using one-hot vectorization, maintaining an F-score of .83, which was also the best-performing model for the original domain.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 148,
                        "end": 155,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Domain-Extended G2G Evaluation",
                "sec_num": "6.3"
            },
            {
                "text": "In order to scale up to real-time use, the two-step NLU pipeline will leverage the retrained automatic Standard-AMR parser described in \u00a73.1; however, up to this point we have reported conversion system results using manually obtained, gold-standard Standard-AMR parses in order to explore the validity of our conversion system approaches without the noise from parsing. Table 3 summarizes the performance of the overall best-performing (across both Smatch scores and component accuracy) expanded keyword and one-hot vectorization classifier G2G variants, after domain extension, given Standard-AMR input from the parser. The expanded keyword variant is the best-performing model with automatic input, but the scores are close. Although the Smatch F-score has dropped from .71 (with gold-standard input) to .59, we still find this to be very encouraging performance, given the challenges of semantic parsing in a new domain. ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 371,
                        "end": 378,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Full Automatic Pipeline Evaluation",
                "sec_num": "6.4"
            },
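            {
                "text": "For clarity on these scores: Smatch counts the triples shared by system and gold graphs under the best variable alignment, and precision, recall, and F-score then follow the standard definitions (Cai and Knight, 2013). A minimal sketch of the final computation (the alignment search itself is omitted):\n\ndef smatch_f(match_num, test_num, gold_num):\n    # match_num: triples matched under the best alignment;\n    # test_num / gold_num: triple counts in system and gold graphs.\n    p = match_num / test_num if test_num else 0.0\n    r = match_num / gold_num if gold_num else 0.0\n    return 2 * p * r / (p + r) if p + r else 0.0",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Full Automatic Pipeline Evaluation",
                "sec_num": "6.4"
            },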
            {
                "text": "This research is part of a growing body of work in representing various levels of interpretation in existing meaning representation frameworks, and in AMR in particular. We briefly note especially relevant work here. Bastianelli et al. (2014) present their Human Robot Interaction Corpus (HuRIC) following the same Penman Notation (Penman Natural Language Group, 1989) syntax of AMR, but significantly altering AMR to use the sense distinctions and semantic role labels of FrameNet (Fillmore et al., 2012), thereby rendering the use of automatic parsers trained on AMR data challenging. Shen (2018) presents a small corpus (266 instances) of manually annotated AMRs for spoken language to explore the validity of using AMR for spoken language understanding, with promising results but noting that additional data is needed. There is also a neural AMR graph converter for abstractive summarization (producing summary graphs from source graphs) (Liu et al., 2015) ; however, neural approaches require substantial training data in the form of annotated input and output graphs. The current motivation for the multi-step approach explored here is to handle a low resource problem, as we lack sufficient data to experiment with employing a neural network.",
                "cite_spans": [
                    {
                        "start": 217,
                        "end": 242,
                        "text": "Bastianelli et al. (2014)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 943,
                        "end": 961,
                        "text": "(Liu et al., 2015)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "7"
            },
            {
                "text": "This paper evaluates and improves upon a two-step NLU pipeline that gradually tames the variation of language so that it can be understood and acted upon by a robot with a limited repertoire of domain concepts and behaviors. After enumerating the extensions needed for the annotation schema itself and contributing a dataset of Dialogue-AMR for the new Minecraft collaborative dialogue domain, we achieve promising results with roughly 200 instances of training data. We have integrated our updated pipeline into a software stack for a physical robot and are now performing a series of experiments where we use the same dialogue-management system, but vary the NLU component in order to compare task success with the two-step NLU pipeline against a baseline NLU system with a simple syntactic parser. We hypothesize that the NLU pipeline described here, and the deeper semantics of Dialogue-AMR specifically, will be especially advantageous for tracking and grounding user utterances involving coreference (e.g., Go to the sign and send a picture of it.), light verb constructions, which AMR represents identically to parallel synthetic verbs (e.g., make a left turn; turn left), negation (e.g., no, not the door on the right, the left!), and complex, nested prepositions (e.g., move through the doorway in front of you on the left)-all utterances where a simple syntactic parse has been found to lack information needed for interpretation of the intent and grounding. The extrinsic evaluation will also provide an opportunity to explore whether or not the conversion system variant with the best overall Smatch scores corresponds to the best real-world performance, or if we should consider other metrics, such as S 2 match (Opitz et al., 2020) and SemBleu (Song and Gildea, 2019) . As our results did not demonstrate a clear \"best\" rule-based, keyword or classifier approach to domain extension, we will continue to experiment with all three variants and consider which is the most time-efficient to extend, either by adding to the keyword dictionary or adding annotations. Overall, we are optimistic that the semantic representation of Dialogue-AMR, which provides a deeper understanding of both what a person said and what they really meant in the conversational context, will enhance human-robot collaboration.",
                "cite_spans": [
                    {
                        "start": 1725,
                        "end": 1745,
                        "text": "(Opitz et al., 2020)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 1758,
                        "end": 1781,
                        "text": "(Song and Gildea, 2019)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions & Future Work",
                "sec_num": "8"
            },
            {
                "text": "Smatch is an evaluation algorithm for scoring AMR graphs(Cai and Knight, 2013).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The Minecraft AMR corpus includes AMRs for the locations of blocks (expressed as Cartesian coordinates) as each movement takes place; because our focus is natural language dialogue, we removed these instances from our test set.6 Bonn et al.(2020)report an F-score of .66 on a Minecraft test set after retraining the Zhang et al. (2019) parser on Minecraft data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Graph-to-graph meaning representation transformations for human-robot dialogue",
                "authors": [
                    {
                        "first": "Mitchell",
                        "middle": [],
                        "last": "Abrams",
                        "suffix": ""
                    },
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Bonial",
                        "suffix": ""
                    },
                    {
                        "first": "Lucia",
                        "middle": [],
                        "last": "Donatelli",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the Society for Computation in Linguistics 2020",
                "volume": "",
                "issue": "",
                "pages": "250--253",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mitchell Abrams, Claire Bonial, and Lucia Donatelli. 2020. Graph-to-graph meaning representation trans- formations for human-robot dialogue. In Proceed- ings of the Society for Computation in Linguistics 2020, pages 250-253, New York, New York. Asso- ciation for Computational Linguistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "HuRIC: a human robot interaction corpus",
                "authors": [
                    {
                        "first": "Emanuele",
                        "middle": [],
                        "last": "Bastianelli",
                        "suffix": ""
                    },
                    {
                        "first": "Giuseppe",
                        "middle": [],
                        "last": "Castellucci",
                        "suffix": ""
                    },
                    {
                        "first": "Danilo",
                        "middle": [],
                        "last": "Croce",
                        "suffix": ""
                    },
                    {
                        "first": "Luca",
                        "middle": [],
                        "last": "Iocchi",
                        "suffix": ""
                    },
                    {
                        "first": "Roberto",
                        "middle": [],
                        "last": "Basili",
                        "suffix": ""
                    },
                    {
                        "first": "Daniele",
                        "middle": [],
                        "last": "Nardi",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "LREC",
                "volume": "",
                "issue": "",
                "pages": "4519--4526",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Emanuele Bastianelli, Giuseppe Castellucci, Danilo Croce, Luca Iocchi, Roberto Basili, and Daniele Nardi. 2014. HuRIC: a human robot interaction cor- pus. In LREC, pages 4519-4526.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Dialogue-AMR: Abstract Meaning Representation for dialogue",
                "authors": [
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Bonial",
                        "suffix": ""
                    },
                    {
                        "first": "Lucia",
                        "middle": [],
                        "last": "Donatelli",
                        "suffix": ""
                    },
                    {
                        "first": "Mitchell",
                        "middle": [],
                        "last": "Abrams",
                        "suffix": ""
                    },
                    {
                        "first": "Stephanie",
                        "middle": [
                            "M"
                        ],
                        "last": "Lukin",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Tratz",
                        "suffix": ""
                    },
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Marge",
                        "suffix": ""
                    },
                    {
                        "first": "Ron",
                        "middle": [],
                        "last": "Artstein",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Traum",
                        "suffix": ""
                    },
                    {
                        "first": "Clare",
                        "middle": [],
                        "last": "Voss",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
                "volume": "",
                "issue": "",
                "pages": "684--695",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Claire Bonial, Lucia Donatelli, Mitchell Abrams, Stephanie M. Lukin, Stephen Tratz, Matthew Marge, Ron Artstein, David Traum, and Clare Voss. 2020. Dialogue-AMR: Abstract Meaning Representation for dialogue. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 684-695, Marseille, France. European Language Re- sources Association.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Augmenting Abstract Meaning Representation for human-robot dialogue",
                "authors": [
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Bonial",
                        "suffix": ""
                    },
                    {
                        "first": "Lucia",
                        "middle": [],
                        "last": "Donatelli",
                        "suffix": ""
                    },
                    {
                        "first": "Stephanie",
                        "middle": [
                            "M"
                        ],
                        "last": "Lukin",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Tratz",
                        "suffix": ""
                    },
                    {
                        "first": "Ron",
                        "middle": [],
                        "last": "Artstein",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Traum",
                        "suffix": ""
                    },
                    {
                        "first": "Clare",
                        "middle": [],
                        "last": "Voss",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the First International Workshop on Designing Meaning Representations",
                "volume": "",
                "issue": "",
                "pages": "199--210",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/W19-3322"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Claire Bonial, Lucia Donatelli, Stephanie M. Lukin, Stephen Tratz, Ron Artstein, David Traum, and Clare Voss. 2019. Augmenting Abstract Meaning Representation for human-robot dialogue. In Pro- ceedings of the First International Workshop on De- signing Meaning Representations, pages 199-210, Florence, Italy. Association for Computational Lin- guistics.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Spatial AMR: Expanded spatial annotation in the context of a grounded Minecraft corpus",
                "authors": [
                    {
                        "first": "Julia",
                        "middle": [],
                        "last": "Bonn",
                        "suffix": ""
                    },
                    {
                        "first": "Martha",
                        "middle": [],
                        "last": "Palmer",
                        "suffix": ""
                    },
                    {
                        "first": "Zheng",
                        "middle": [],
                        "last": "Cai",
                        "suffix": ""
                    },
                    {
                        "first": "Kristin",
                        "middle": [],
                        "last": "Wright-Bettner",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 12th Language Resources and Evaluation Conference",
                "volume": "",
                "issue": "",
                "pages": "4883--4892",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Julia Bonn, Martha Palmer, Zheng Cai, and Kristin Wright-Bettner. 2020. Spatial AMR: Expanded spatial annotation in the context of a grounded Minecraft corpus. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 4883-4892, Marseille, France. European Language Resources Association.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "ISO 24617-2: A semantically-based standard for dialogue annotation",
                "authors": [
                    {
                        "first": "Harry",
                        "middle": [],
                        "last": "Bunt",
                        "suffix": ""
                    },
                    {
                        "first": "Jan",
                        "middle": [],
                        "last": "Alexandersson",
                        "suffix": ""
                    },
                    {
                        "first": "Jae-Woong",
                        "middle": [],
                        "last": "Choe",
                        "suffix": ""
                    },
                    {
                        "first": "Alex",
                        "middle": [
                            "Chengyu"
                        ],
                        "last": "Fang",
                        "suffix": ""
                    },
                    {
                        "first": "Koiti",
                        "middle": [],
                        "last": "Hasida",
                        "suffix": ""
                    },
                    {
                        "first": "Volha",
                        "middle": [],
                        "last": "Petukhova",
                        "suffix": ""
                    },
                    {
                        "first": "Andrei",
                        "middle": [],
                        "last": "Popescu-Belis",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Traum",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)",
                "volume": "",
                "issue": "",
                "pages": "430--437",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Harry Bunt, Jan Alexandersson, Jae-Woong Choe, Alex Chengyu Fang, Koiti Hasida, Volha Petukhova, Andrei Popescu-Belis, and David Traum. 2012. ISO 24617-2: A semantically-based standard for dia- logue annotation. In Proceedings of the Eighth In- ternational Conference on Language Resources and Evaluation (LREC'12), pages 430-437.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Smatch: an evaluation metric for semantic feature structures",
                "authors": [
                    {
                        "first": "Shu",
                        "middle": [],
                        "last": "Cai",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
                "volume": "2",
                "issue": "",
                "pages": "748--752",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Shu Cai and Kevin Knight. 2013. Smatch: an evalua- tion metric for semantic feature structures. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 2: Short Papers), pages 748-752, Sofia, Bulgaria. Associa- tion for Computational Linguistics.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
                "authors": [
                    {
                        "first": "Jacob",
                        "middle": [],
                        "last": "Devlin",
                        "suffix": ""
                    },
                    {
                        "first": "Ming-Wei",
                        "middle": [],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "Kenton",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Kristina",
                        "middle": [],
                        "last": "Toutanova",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "1",
                "issue": "",
                "pages": "4171--4186",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/N19-1423"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Annotation of tense and aspect semantics for sentential AMR",
                "authors": [
                    {
                        "first": "Lucia",
                        "middle": [],
                        "last": "Donatelli",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Regan",
                        "suffix": ""
                    },
                    {
                        "first": "William",
                        "middle": [],
                        "last": "Croft",
                        "suffix": ""
                    },
                    {
                        "first": "Nathan",
                        "middle": [],
                        "last": "Schneider",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions",
                "volume": "",
                "issue": "",
                "pages": "96--108",
                "other_ids": {
                    "DOI": [
                        "10.1207/s15516709cog0303_1"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Lucia Donatelli, Michael Regan, William Croft, and Nathan Schneider. 2018. Annotation of tense and as- pect semantics for sentential AMR. In Proceedings of the Joint Workshop on Linguistic Annotation, Mul- tiword Expressions and Constructions (LAW-MWE- CxG-2018), pages 96-108.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "The FrameNet Constructicon. Signbased construction grammar",
                "authors": [
                    {
                        "first": "Charles",
                        "middle": [
                            "J"
                        ],
                        "last": "Fillmore",
                        "suffix": ""
                    },
                    {
                        "first": "Russell",
                        "middle": [],
                        "last": "Lee-Goldman",
                        "suffix": ""
                    },
                    {
                        "first": "Russell",
                        "middle": [],
                        "last": "Rhodes",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "309--372",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Charles J Fillmore, Russell Lee-Goldman, and Russell Rhodes. 2012. The FrameNet Constructicon. Sign- based construction grammar, pages 309-372.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Two constraints on speech act ambiguity",
                "authors": [
                    {
                        "first": "Elizabeth",
                        "middle": [],
                        "last": "Hinkelman",
                        "suffix": ""
                    },
                    {
                        "first": "James",
                        "middle": [],
                        "last": "Allen",
                        "suffix": ""
                    }
                ],
                "year": 1989,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Elizabeth Hinkelman and James Allen. 1989. Two con- straints on speech act ambiguity.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Compositional semantic parsing across graphbanks",
                "authors": [
                    {
                        "first": "Matthias",
                        "middle": [],
                        "last": "Lindemann",
                        "suffix": ""
                    },
                    {
                        "first": "Jonas",
                        "middle": [],
                        "last": "Groschwitz",
                        "suffix": ""
                    },
                    {
                        "first": "Alexander",
                        "middle": [],
                        "last": "Koller",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "4576--4585",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P19-1450"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Matthias Lindemann, Jonas Groschwitz, and Alexan- der Koller. 2019. Compositional semantic parsing across graphbanks. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics, pages 4576-4585, Florence, Italy. Asso- ciation for Computational Linguistics.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Toward abstractive summarization using semantic representations",
                "authors": [
                    {
                        "first": "Fei",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Flanigan",
                        "suffix": ""
                    },
                    {
                        "first": "Sam",
                        "middle": [],
                        "last": "Thomson",
                        "suffix": ""
                    },
                    {
                        "first": "Norman",
                        "middle": [],
                        "last": "Sadeh",
                        "suffix": ""
                    },
                    {
                        "first": "Noah A",
                        "middle": [],
                        "last": "Smith",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A Smith. 2015. Toward abstrac- tive summarization using semantic representations. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Applying the Wizard-of-Oz technique to multimodal human-robot dialogue",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Marge",
                        "suffix": ""
                    },
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Bonial",
                        "suffix": ""
                    },
                    {
                        "first": "Brendan",
                        "middle": [],
                        "last": "Byrne",
                        "suffix": ""
                    },
                    {
                        "first": "Taylor",
                        "middle": [],
                        "last": "Cassidy",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [
                            "William"
                        ],
                        "last": "Evans",
                        "suffix": ""
                    },
                    {
                        "first": "Susan",
                        "middle": [
                            "G"
                        ],
                        "last": "Hill",
                        "suffix": ""
                    },
                    {
                        "first": "Clare",
                        "middle": [],
                        "last": "Voss",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "RO-MAN 2016: IEEE International Symposium on Robot and Human Interactive Communication",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew Marge, Claire Bonial, Brendan Byrne, Taylor Cassidy, A. William Evans, Susan G. Hill, and Clare Voss. 2016. Applying the Wizard-of-Oz technique to multimodal human-robot dialogue. In RO-MAN 2016: IEEE International Symposium on Robot and Human Interactive Communication.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Exploring variation of natural human commands to a robot in a collaborative navigation task",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Marge",
                        "suffix": ""
                    },
                    {
                        "first": "Claire",
                        "middle": [],
                        "last": "Bonial",
                        "suffix": ""
                    },
                    {
                        "first": "Ashley",
                        "middle": [],
                        "last": "Foots",
                        "suffix": ""
                    },
                    {
                        "first": "Cory",
                        "middle": [],
                        "last": "Hayes",
                        "suffix": ""
                    },
                    {
                        "first": "Cassidy",
                        "middle": [],
                        "last": "Henry",
                        "suffix": ""
                    },
                    {
                        "first": "Kimberly",
                        "middle": [],
                        "last": "Pollard",
                        "suffix": ""
                    },
                    {
                        "first": "Ron",
                        "middle": [],
                        "last": "Artstein",
                        "suffix": ""
                    },
                    {
                        "first": "Clare",
                        "middle": [],
                        "last": "Voss",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Traum",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the First Workshop on Language Grounding for Robotics",
                "volume": "",
                "issue": "",
                "pages": "58--66",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew Marge, Claire Bonial, Ashley Foots, Cory Hayes, Cassidy Henry, Kimberly Pollard, Ron Art- stein, Clare Voss, and David Traum. 2017. Explor- ing variation of natural human commands to a robot in a collaborative navigation task. In Proceedings of the First Workshop on Language Grounding for Robotics, pages 58-66.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Collaborative dialogue in Minecraft",
                "authors": [
                    {
                        "first": "Anjali",
                        "middle": [],
                        "last": "Narayan-Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Prashant",
                        "middle": [],
                        "last": "Jayannavar",
                        "suffix": ""
                    },
                    {
                        "first": "Julia",
                        "middle": [],
                        "last": "Hockenmaier",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "5405--5415",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P19-1537"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Anjali Narayan-Chen, Prashant Jayannavar, and Ju- lia Hockenmaier. 2019. Collaborative dialogue in Minecraft. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguistics, pages 5405-5415, Florence, Italy. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "AMR similarity metrics from principles",
                "authors": [
                    {
                        "first": "Juri",
                        "middle": [],
                        "last": "Opitz",
                        "suffix": ""
                    },
                    {
                        "first": "Letitia",
                        "middle": [],
                        "last": "Parcalabescu",
                        "suffix": ""
                    },
                    {
                        "first": "Anette",
                        "middle": [],
                        "last": "Frank",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Transactions of the Association for Computational Linguistics",
                "volume": "8",
                "issue": "",
                "pages": "522--538",
                "other_ids": {
                    "DOI": [
                        "10.1162/tacl_a_00329"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Juri Opitz, Letitia Parcalabescu, and Anette Frank. 2020. AMR similarity metrics from principles. Transactions of the Association for Computational Linguistics, 8:522-538.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Penman Natural Language Group",
                "authors": [],
                "year": 1989,
                "venue": "Information Sciences Institute",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Penman Natural Language Group. 1989. The Penman user guide. Technical report, Information Sciences Institute.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Wizard of Oz Studies in HRI: A Systematic Review and New Reporting Guidelines",
                "authors": [
                    {
                        "first": "Laurel",
                        "middle": [],
                        "last": "Riek",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Journal of Human-Robot Interaction",
                "volume": "",
                "issue": "1",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Laurel Riek. 2012. Wizard of Oz Studies in HRI: A Systematic Review and New Reporting Guidelines. Journal of Human-Robot Interaction, 1(1).",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Semantic Parsing in Spoken Language Understanding using Abstract Meaning Representation",
                "authors": [
                    {
                        "first": "Hongyuan",
                        "middle": [],
                        "last": "Shen",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hongyuan Shen. 2018. Semantic Parsing in Spoken Language Understanding using Abstract Meaning Representation. Ph.D. thesis, Brandeis University.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Sembleu: A robust metric for amr parsing evaluation",
                "authors": [
                    {
                        "first": "Linfeng",
                        "middle": [],
                        "last": "Song",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Gildea",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {
                    "arXiv": [
                        "arXiv:1905.10726"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Linfeng Song and Daniel Gildea. 2019. Sembleu: A robust metric for amr parsing evaluation. arXiv preprint arXiv:1905.10726.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Semantics and pragmatics of questions and answers for dialogue agents",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Traum",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "proceedings of the International Workshop on Computational Semantics",
                "volume": "",
                "issue": "",
                "pages": "380--394",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Traum. 2003. Semantics and pragmatics of questions and answers for dialogue agents. In pro- ceedings of the International Workshop on Compu- tational Semantics, pages 380-394.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "AMR Parsing as Sequence-to-Graph Transduction",
                "authors": [
                    {
                        "first": "Sheng",
                        "middle": [],
                        "last": "Zhang",
                        "suffix": ""
                    },
                    {
                        "first": "Xutai",
                        "middle": [],
                        "last": "Ma",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Duh",
                        "suffix": ""
                    },
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Van Durme",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sheng Zhang, Xutai Ma, Kevin Duh, and Benjamin Van Durme. 2019. AMR Parsing as Sequence-to- Graph Transduction. In Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Florence, Italy. Association for Computational Linguistics.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "type_str": "figure",
                "num": null,
                "text": "c2 / commander) :direction (b / back)) :ARG1 (g / go-02 :completable -:ARG0 r :direction (b / back) :time (a / after :op1 (n / now))) :ARG2 (r / robot)) Move back in (a) Standard-AMR (parser output), (b) Dialogue-AMR (conversion system output)."
            },
            "FIGREF1": {
                "uris": null,
                "type_str": "figure",
                "num": null,
                "text": "Standard and Dialogue-AMR comparison for Commander instructing robot Move forward three feet."
            },
            "TABREF0": {
                "html": null,
                "type_str": "table",
                "text": "Retrained AMR parser Smatch results on SCOUT Continuous-trial test set.",
                "num": null,
                "content": "<table><tr><td>Parser</td><td>Training</td><td>P</td><td>R</td><td>F</td></tr><tr><td/><td>AMR 2.0</td><td colspan=\"3\">.47 .77 .58</td></tr><tr><td>Zhang et al.</td><td colspan=\"4\">2.0 + DialAMR .73 .77 .75 AMR 3.0 .52 .80 .63</td></tr><tr><td/><td colspan=\"4\">3.0 + DialAMR .88 .89 .89</td></tr><tr><td/><td>AMR 2.0</td><td colspan=\"3\">.53 .77 .63</td></tr><tr><td>Lindemann</td><td colspan=\"4\">2.0 + DialAMR .92 .94 .93 AMR 3.0 .55 .81 .65</td></tr><tr><td/><td colspan=\"4\">3.0 + DialAMR .91 .95 .93</td></tr></table>"
            },
            "TABREF2": {
                "html": null,
                "type_str": "table",
                "text": "Summary of Smatch scores & Robot-Concept Relation classification accuracy for each variant conversion system, including our G2G system before and after Minecraft domain extension, tested on SCOUT and Minecraft.",
                "num": null,
                "content": "<table/>"
            },
            "TABREF4": {
                "html": null,
                "type_str": "table",
                "text": "Smatch scores for best-performing domainextended (ext.) G2G variants using automatically obtained Standard-AMR input from retrained parser.",
                "num": null,
                "content": "<table/>"
            }
        }
    }
}