{
    "paper_id": "I17-1021",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:38:18.467024Z"
    },
    "title": "Distributional Modeling on a Diet: One-shot Word Learning from Text Only",
    "authors": [
        {
            "first": "Su",
            "middle": [],
            "last": "Wang",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Stephen",
            "middle": [],
            "last": "Roller",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "The University of Texas at Austin",
                "location": {}
            },
            "email": "roller@cs.utexas.edu"
        },
        {
            "first": "Katrin",
            "middle": [],
            "last": "Erk",
            "suffix": "",
            "affiliation": {},
            "email": "katrin.erk@mail.utexas.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "We test whether distributional models can do one-shot learning of definitional properties from text only. Using Bayesian models, we find that first learning overarching structure in the known data, regularities in textual contexts and in properties, helps one-shot learning, and that individual context items can be highly informative. Our experiments show that our model can learn properties from a single exposure when given an informative utterance.",
    "pdf_parse": {
        "paper_id": "I17-1021",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "We test whether distributional models can do one-shot learning of definitional properties from text only. Using Bayesian models, we find that first learning overarching structure in the known data, regularities in textual contexts and in properties, helps one-shot learning, and that individual context items can be highly informative. Our experiments show that our model can learn properties from a single exposure when given an informative utterance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "When humans encounter an unknown word in text, even with a single instance, they can often infer approximately what it means, as in this example from Lazaridou et al. (2014) :",
                "cite_spans": [
                    {
                        "start": 150,
                        "end": 173,
                        "text": "Lazaridou et al. (2014)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We found a cute, hairy wampimuk sleeping behind the tree.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "People who hear this sentence typically guess that a wampimuk is an animal, or even that it is a mammal. Distributional models, which describe the meaning of a word in terms of its observed contexts (Turney and Pantel, 2010) , have been suggested as a model for how humans learn word meanings (Landauer and Dumais, 1997) . However, distributional models typically need hundreds of instances of a word to derive a highquality representation for it, while humans can often infer a passable meaning approximation from one sentence only (as in the above example). This phenomenon is known as fast mapping (Carey and Bartlett, 1978) , Our primary modeling objective in this paper is to explore a plausible model for fastmapping learning from textual context.",
                "cite_spans": [
                    {
                        "start": 199,
                        "end": 224,
                        "text": "(Turney and Pantel, 2010)",
                        "ref_id": "BIBREF33"
                    },
                    {
                        "start": 293,
                        "end": 320,
                        "text": "(Landauer and Dumais, 1997)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 601,
                        "end": 627,
                        "text": "(Carey and Bartlett, 1978)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "While there is preliminary evidence that fast mapping can be modeled distributionally (Lazaridou et al., 2016) , it is unclear what enables it.",
                "cite_spans": [
                    {
                        "start": 86,
                        "end": 110,
                        "text": "(Lazaridou et al., 2016)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "How do humans infer word meanings from so little data? This question has been studied for grounded word learning, when the learner perceives an object in non-linguistic context that corresponds to the unknown word. The literature emphasizes the importance of learning general knowledge or overarching structure, which we define as the information that is learned by accumulation across concepts (e.g. regularities in property co-occurrence), across all concepts (Kemp et al., 2007) , In grounded word learning, overarching structure that has been proposed includes knowledge about which properties. For example knowledge about which properties are most important to object naming (Smith et al., 2002; Colunga and Smith, 2005) , or a taxonomy of concepts (Xu and Tenenbaum, 2007) .",
                "cite_spans": [
                    {
                        "start": 462,
                        "end": 481,
                        "text": "(Kemp et al., 2007)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 680,
                        "end": 700,
                        "text": "(Smith et al., 2002;",
                        "ref_id": "BIBREF29"
                    },
                    {
                        "start": 701,
                        "end": 725,
                        "text": "Colunga and Smith, 2005)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 754,
                        "end": 778,
                        "text": "(Xu and Tenenbaum, 2007)",
                        "ref_id": "BIBREF35"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In this paper we study models for fast mapping in word learning 1 from textual context alone, using probabilistic distributional models. Our task differs from the grounded case in that we do not perceive any object labeled by the unknown word. In that context, learning word meaning means learning the associated definitional properties and their weights (see Section 3). For the sake of interpretability, we focus on learning definitional properties We ask what kinds of overarching structure in distributional contexts and in properties will be helpful for one-shot word learning.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We focus on learning from syntactic context. Distributional representations of syntactic context are directly interpretable as selectional constraints, which in manually created resources are typically characterized through high-level taxonomy classes (Kipper-Schuler, 2005; Fillmore et al., 2003) . So they should provide good evidence for the meaning of role fillers. Also, it has been shown that selectional constraints can be learned distributionally (Erk et al., 2010; \u00d3 S\u00e9aghdha and Korhonen, 2014; Ritter et al., 2010) . However, our point will not be that syntax is needed for fast word learning, but that it helps to observe overarching structure, with syntactic context providing a clear test bed.",
                "cite_spans": [
                    {
                        "start": 252,
                        "end": 274,
                        "text": "(Kipper-Schuler, 2005;",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 275,
                        "end": 297,
                        "text": "Fillmore et al., 2003)",
                        "ref_id": "BIBREF9"
                    },
                    {
                        "start": 455,
                        "end": 473,
                        "text": "(Erk et al., 2010;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 474,
                        "end": 504,
                        "text": "\u00d3 S\u00e9aghdha and Korhonen, 2014;",
                        "ref_id": null
                    },
                    {
                        "start": 505,
                        "end": 525,
                        "text": "Ritter et al., 2010)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We test two types of overarching structure for their usefulness in fast mapping. First, we hypothesize that it is helpful to learn about commonalities among context items, which enables mapping from contexts to properties. For example the syntactic contexts eat-dobj and cook-dobj should prefer similar targets: things that are cooked are also things that are eaten (Hypothesis H1).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "The second hypothesis is that it will be useful to learn co-occurrence patterns between properties. That is, we hypothesize that in learning an entity is a mammal, we may also infer it is four-legged (Hypothesis H2).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We do not intent to make strong cognitive claims, for which additional experimentation will be in order, and we leave this for future work. This work sets its goal on building a plausible computational model that models human fast-mapping in learning (i) well from limited grounded data, (ii) effectively from only one instance.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Fast mapping and textual context. Fast mapping (Carey and Bartlett, 1978) is the human ability to construct provisional word meaning representations after one or few exposures. An important reason for why humans can do fast mapping is that they acquire overarching structure that constrains learning (Smith et al., 2002; Colunga and Smith, 2005; Kemp et al., 2007; Xu and Tenenbaum, 2007; Maas and Kemp, 2009) . In this paper, we ask what forms of overarching structure will be useful for text-based word learning. Lazaridou et al. (2014) consider fast mapping for grounded word learning, mapping image data to distributional representations, which is in a way the mirror image of our task. Lazaridou et al. (2016) were the first to explore fast mapping for text-based word learning, using an extension to word2vec with both textual and visual features. However, they model the unknown word simply by averaging the vectors of known words in the sentence, and do not explore what types of knowl-edge enable fast mapping.",
                "cite_spans": [
                    {
                        "start": 47,
                        "end": 73,
                        "text": "(Carey and Bartlett, 1978)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 300,
                        "end": 320,
                        "text": "(Smith et al., 2002;",
                        "ref_id": "BIBREF29"
                    },
                    {
                        "start": 321,
                        "end": 345,
                        "text": "Colunga and Smith, 2005;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 346,
                        "end": 364,
                        "text": "Kemp et al., 2007;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 365,
                        "end": 388,
                        "text": "Xu and Tenenbaum, 2007;",
                        "ref_id": "BIBREF35"
                    },
                    {
                        "start": 389,
                        "end": 409,
                        "text": "Maas and Kemp, 2009)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 515,
                        "end": 538,
                        "text": "Lazaridou et al. (2014)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 691,
                        "end": 714,
                        "text": "Lazaridou et al. (2016)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Background",
                "sec_num": "2"
            },
            {
                "text": "Definitional properties. Feature norms are definitional properties collected from human participants. Feature norm datasets are available from McRae et al. (2005) and Vigliocco et al. (2004) . In this paper we use feature norms as our target representations of word meaning. There are several recent approaches that learn to map distributional representations to feature norms (Johns and Jones, 2012; Rubinstein et al., 2015; F\u0203g\u0203r\u0203\u015fan et al., 2015; Herbelot and Vecchi, 2015a) . We also map distributional information to feature norms, but we do it based on a single textual instance (one-shot learning).",
                "cite_spans": [
                    {
                        "start": 143,
                        "end": 162,
                        "text": "McRae et al. (2005)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 167,
                        "end": 190,
                        "text": "Vigliocco et al. (2004)",
                        "ref_id": "BIBREF34"
                    },
                    {
                        "start": 377,
                        "end": 400,
                        "text": "(Johns and Jones, 2012;",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 401,
                        "end": 425,
                        "text": "Rubinstein et al., 2015;",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 426,
                        "end": 449,
                        "text": "F\u0203g\u0203r\u0203\u015fan et al., 2015;",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 450,
                        "end": 477,
                        "text": "Herbelot and Vecchi, 2015a)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Background",
                "sec_num": "2"
            },
            {
                "text": "In the current paper we use the Quantified McRae (QMR) dataset (Herbelot and Vecchi, 2015b) , which extends the McRae et al. (2005) feature norms by ratings on the proportion of category members that have a property, and the Animal dataset (Herbelot, 2013) , which is smaller but has the same shape. For example, most alligators are dangerous. The quantifiers are given probabilistic interpretations, so if most alligators are dangerous, the probability for a random alligator to be dangerous would be 0.95. This makes this dataset a good fit for our probabilistic distributional model. We discuss QMR and the Animal data further in Section 4.",
                "cite_spans": [
                    {
                        "start": 63,
                        "end": 91,
                        "text": "(Herbelot and Vecchi, 2015b)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 112,
                        "end": 131,
                        "text": "McRae et al. (2005)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 240,
                        "end": 256,
                        "text": "(Herbelot, 2013)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Background",
                "sec_num": "2"
            },
            {
                "text": "Bayesian models in lexical semantics. We use Bayesian models for the sake of interpretability and because the existing definitional property datasets are small. The Bayesian models in lexical semantics that are most related to our approach are Dinu and Lapata (2010) , who represent word meanings as distributions over latent topics that approximate senses, and Andrews et al. (2009) and Roller and Schulte im Walde (2013), who use multi-modal extensions of Latent Dirichlet Allocation (LDA) models (Blei et al., 2003) to represent co-occurrences of textual context and definitional features.\u00d3 S\u00e9aghdha (2010) and Ritter et al. (2010) use Bayesian approaches to model selectional preferences.",
                "cite_spans": [
                    {
                        "start": 244,
                        "end": 266,
                        "text": "Dinu and Lapata (2010)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 362,
                        "end": 383,
                        "text": "Andrews et al. (2009)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 499,
                        "end": 518,
                        "text": "(Blei et al., 2003)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 614,
                        "end": 634,
                        "text": "Ritter et al. (2010)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Background",
                "sec_num": "2"
            },
            {
                "text": "In this section we develop a series of models to test our hypothesis that acquiring general knowledge is helpful to word learning, in particular knowledge about similarities between context items (H1) and co-occurrences between properties (H2). The count-based model will implement neither hypoth-esis, while the bimodal topic model will implement both. To test the hypotheses separately, we employ two clustering approaches via Bernoulli Mixtures, which we use as extensions to the countbased model and bimodal topic model.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Models",
                "sec_num": "3"
            },
            {
                "text": "Independent Bernoulli condition. Let Q be a set of definitional properties, C a set of concepts that the learner knows about, and V a vocabulary of context items. For most of our models, context items w \u2208 V will be predicate-role pairs such as eat-dobj. The task is determine properties that apply to an unknown concept u \u2208 C. Any concept c \u2208 C is associated with a vector c Ind (where \"Ind\" stands for \"independent Bernoulli probabilities\") of |Q| probabilities, where the i-th entry of c Ind is the probability that an instance of concept c would have property q i . These probabilities are independent Bernoulli probabilities. For instance, alligator Ind would have an entry of 0.95 for dangerous. An instance c \u2208 {0, 1} |Q| of a concept c \u2208 C is a vector of zeros and ones drawn from c Ind , where an entry of 1 at position i means that this instance has the property q i .",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Count-based Model",
                "sec_num": "3.1"
            },
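To make the representation above concrete, here is a minimal Python sketch (not the authors' code) of a concept vector c_Ind and of sampling a binary instance from it; the property names and probability values are invented for illustration.

```python
# Minimal sketch of the Independent Bernoulli representation (toy numbers).
import numpy as np

rng = np.random.default_rng(0)

properties = ["dangerous", "four-legged", "green"]   # toy subset of Q
# c_Ind: a concept's vector of independent Bernoulli probabilities (values invented)
alligator_ind = np.array([0.95, 1.0, 0.95])

# An instance of the concept is a 0/1 vector sampled property-wise from c_Ind
instance = rng.binomial(1, alligator_ind)
print(dict(zip(properties, instance)))
```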
            {
                "text": "The model proceeds in two steps. First it learns property probabilities for context items w \u2208 V . The model observes instances c occurring textually with context item w, and learns property probabilities for w, where the probability that w has for a property q indicates the probability that w would appear as a context item with an instance that has property q. In the second step the model uses the acquired context item representations to learn property probabilities for an unknown concept u. When u appears with w, the context item w \"imagines\" an instance (samples it from its property probabilities), and uses this instance to update the property probabilities of u. Instead of making point estimates, the model represents its uncertainty about the probability of a property through a Beta distribution, a distribution over Bernoulli probabilities. As a Beta distribution is characterized by two parameters \u03b1 and \u03b2, we associate each context item w \u2208 V with vectors w \u03b1 \u2208 R |Q| and w \u03b2 \u2208 R |Q| , where the i-th \u03b1 and \u03b2 values are the parameters of the Beta distribution for property q i . When an instance c is observed with context item w, we do a Bayesian update on w simply as",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Count-based Model",
                "sec_num": "3.1"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "w \u03b1 = w \u03b1 + c w \u03b2 = w \u03b2 + (1 \u2212 c)",
                        "eq_num": "(1)"
                    }
                ],
                "section": "The Count-based Model",
                "sec_num": "3.1"
            },
            {
                "text": "because the Beta distribution is the conjugate prior of the Bernoulli. To draw an instance from w, we draw it from the predictive posterior probabilities of its Beta distributions, w Ind = w \u03b1 /(w \u03b1 + w \u03b2 ). Likewise, we associate an unknown concept u with vectors u \u03b1 and u \u03b2 . When the model observes u in the context of w, it draws an instance from w Ind , and performs a Bayesian update as in (1) on the vectors associated with u. After training, the property probabilities for u are again the posterior predictive probabilities u Ind = u \u03b1 /(u \u03b1 +u \u03b2 ). The model can be used for multi-shot learning and oneshot learning in the same way.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Count-based Model",
                "sec_num": "3.1"
            },
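A minimal sketch of the Count Independent updates described above, assuming a toy three-property space: the Beta parameters of a context item are updated as in Eq. (1), the posterior predictive w_Ind = w_alpha / (w_alpha + w_beta) is computed, and an unknown concept is updated from one "imagined" instance. All numbers are illustrative, not from the paper.

```python
# Sketch of the Beta-Bernoulli updates of Eq. (1) and one-shot learning (toy data).
import numpy as np

rng = np.random.default_rng(1)
n_props = 3

# Each context item w keeps Beta parameters per property (uniform Beta(1,1) prior).
w_alpha = np.ones(n_props)
w_beta = np.ones(n_props)

def update(alpha, beta, instance):
    """Conjugate Beta-Bernoulli update: alpha += c, beta += (1 - c)."""
    return alpha + instance, beta + (1 - instance)

# Observe a few instances together with the context item (e.g. eat-dobj)
for instance in ([1, 1, 0], [1, 0, 0], [1, 1, 1]):
    w_alpha, w_beta = update(w_alpha, w_beta, np.array(instance))

# Posterior predictive probabilities w_Ind = alpha / (alpha + beta)
w_ind = w_alpha / (w_alpha + w_beta)

# One-shot learning of an unknown concept u seen once with w:
u_alpha, u_beta = np.ones(n_props), np.ones(n_props)
imagined = rng.binomial(1, w_ind)            # w "imagines" an instance
u_alpha, u_beta = update(u_alpha, u_beta, imagined)
print(u_alpha / (u_alpha + u_beta))          # property probabilities for u
```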
            {
                "text": "Multinomial condition. We also test a multinomial variant of the count-based model, for greater comparability with the LDA model below. Here, the concept representation c Mult is a multinomial distribution over the properties in Q. (That is, all the properties compete in this model.) An instance of concept c is now a single property, drawn from c's multinomial. The representation of a context item w, and also the representation of the unknown concept u, is a Dirichlet distribution with |Q| parameters. Bayesian update of the representation of w based on an occurrence with c, and likewise Bayesian update of the representation of u based on an occurrence with w, is straightforward again, as the Dirichlet distribution is the conjugate prior of the multinomial.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Count-based Model",
                "sec_num": "3.1"
            },
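The multinomial variant admits an equally small sketch, again with invented numbers: each observation contributes a single property index drawn from the concept's multinomial, and the context item's Dirichlet pseudo-counts are incremented accordingly.

```python
# Sketch of the Dirichlet-multinomial variant (toy numbers).
import numpy as np

rng = np.random.default_rng(2)
n_props = 3
w_dirichlet = np.ones(n_props)                 # Dirichlet(1,...,1) prior for w

c_mult = np.array([0.6, 0.3, 0.1])             # hypothetical c_Mult of a known concept
for _ in range(5):
    q = rng.choice(n_props, p=c_mult)          # one observed property per occurrence
    w_dirichlet[q] += 1                        # conjugate Dirichlet-multinomial update

print(w_dirichlet / w_dirichlet.sum())         # posterior mean over properties
```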
            {
                "text": "The two count-based models do not implement either of our two hypotheses. They compute separate selectional constraints for each context item, and do not attend to co-occurrences between properties. In the experiments below, the count-based models will be listed as Count Independent and Count Multinomial.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Count-based Model",
                "sec_num": "3.1"
            },
            {
                "text": "We use an extension of LDA (Blei et al., 2003) to implement our hypotheses on the usefulness of overarching structure, both commonalities in selectional constraints across predicates, and cooccurrence of properties across concepts. In particular, we build on Andrews et al. (2009) in using a bimodal topic model, in which a single topic simultaneously generates both a context item and a property. We further build on Dinu and Lapata (2010) in having a \"pseudo-document\" for each concept c to represent its observed occurrences. In our case, this pseudo-document contains pairs of a context item w \u2208 V and a property q meaning that w has been observed to occur with an instance of c that had q. The generative story is as follows. For each known concept c, draw a multinomial \u03b8 c over topics. For each topic z, draw a multinomial \u03c6 z over context items w \u2208 V , and a multinomial \u03c8 z over properties q \u2208 Q. To generate an entry for c's pseudo-document, draw a topic z \u223c M ult(\u03b8 c ). Then, from z, simultaneously draw a context item from \u03c6 z and a property from \u03c8 z . Figure 1 shows the plate diagram for this model.",
                "cite_spans": [
                    {
                        "start": 27,
                        "end": 46,
                        "text": "(Blei et al., 2003)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 259,
                        "end": 280,
                        "text": "Andrews et al. (2009)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 418,
                        "end": 440,
                        "text": "Dinu and Lapata (2010)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1066,
                        "end": 1074,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "The Bimodal Topic Model",
                "sec_num": "3.2"
            },
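The generative story can be summarised in a short sketch (not the authors' implementation); the vocabulary, properties, and topic count are toy values, and the parameters are sampled here only to make the example runnable.

```python
# Sketch of the bi-TM generative story: topic z jointly emits a context item
# from phi_z and a property from psi_z (toy vocabulary and properties).
import numpy as np

rng = np.random.default_rng(3)
n_topics = 2
vocab = ["eat-dobj", "cook-dobj", "bark-nsubj"]
props = ["edible", "animal"]

alpha, beta, gamma = 0.1, 0.1, 0.1                          # symmetric Dirichlet priors
phi = rng.dirichlet([beta] * len(vocab), size=n_topics)     # topic -> context items
psi = rng.dirichlet([gamma] * len(props), size=n_topics)    # topic -> properties

def generate_pseudo_document(n_pairs=4):
    theta_c = rng.dirichlet([alpha] * n_topics)             # topic mixture for concept c
    doc = []
    for _ in range(n_pairs):
        z = rng.choice(n_topics, p=theta_c)
        w = rng.choice(vocab, p=phi[z])
        q = rng.choice(props, p=psi[z])
        doc.append((w, q))                                  # (context item, property) pair
    return doc

print(generate_pseudo_document())
```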
            {
                "text": "\u2208 Q, \u03b1 \u03b8 c z w q \u03c6 z \u03c8 z \u03b2 \u03b3 (w, q) D z z",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Bimodal Topic Model",
                "sec_num": "3.2"
            },
            {
                "text": "To infer properties for an unknown concept u, we create a pseudo-document for u containing just the observed context items, no properties, as those are not observed. From this pseudo-document d u we infer the topic distribution \u03b8 u . Then the probability of a property q given d u is",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Bimodal Topic Model",
                "sec_num": "3.2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P (q|d u ) = z P (z|\u03b8 u )P (q|\u03c8 z )",
                        "eq_num": "(2)"
                    }
                ],
                "section": "The Bimodal Topic Model",
                "sec_num": "3.2"
            },
            {
                "text": "For the one-shot condition, where we only observe a single context item w with u, this simplifies to",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Bimodal Topic Model",
                "sec_num": "3.2"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "P (q|w) = z P (z|w)P (q|\u03c8 z )",
                        "eq_num": "(3)"
                    }
                ],
                "section": "The Bimodal Topic Model",
                "sec_num": "3.2"
            },
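As a worked example of Eq. (3), with a two-topic model and two properties (all numbers invented), the one-shot property distribution is just a topic-weighted average of the per-topic property distributions:

```python
# Sketch of one-shot inference, Eq. (3): P(q|w) = sum_z P(z|w) P(q|psi_z).
# P(z|w) and psi are assumed to come from a trained bi-TM; toy numbers here.
import numpy as np

p_z_given_w = np.array([0.8, 0.2])          # P(z | w) for the single context item
psi = np.array([[0.7, 0.3],                 # P(q | z=0)
                [0.1, 0.9]])                # P(q | z=1)

p_q_given_w = p_z_given_w @ psi             # marginalise over topics
print(p_q_given_w)                          # -> [0.58, 0.42]
```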
            {
                "text": "We refer to this model as bi-TM below. The topics of this model implement our hypothesis H1 by grouping context items that tend to occur with the same concepts and the same properties. The topics also implement our hypothesis H2 by grouping properties that tend to occur with the same concepts and the same context items. By using multinomials \u03c8 z it makes the simplifying assumption that all properties compete, like the Count Multinomial model above.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The Bimodal Topic Model",
                "sec_num": "3.2"
            },
            {
                "text": "With the Count models, we investigate word learning without any overarching structures. With the bi-TMs, we investigate word learning with both types of overarching structures at once. In order to evaluate each of the two hypotheses separately, we use clustering with Bernoulli Mixture models of either the context items or the properties. A Bernoulli Mixture model (Juan and Vidal, 2004) assumes that a population of m-dimensional binary vectors x has been generated by a set of mixture components K, each of which is a vector of m Bernoulli probabilities:",
                "cite_spans": [
                    {
                        "start": 366,
                        "end": 388,
                        "text": "(Juan and Vidal, 2004)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "p(x) = |K| k=1 p(k)p(x|k)",
                        "eq_num": "(4)"
                    }
                ],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
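A small sketch of Eq. (4) with two hypothetical components over three binary properties; it also returns the responsibilities p(k|x), which the H1 and H2 variants below rely on.

```python
# Sketch of a Bernoulli mixture, Eq. (4): p(x) = sum_k p(k) * prod_i mu_ki^x_i (1-mu_ki)^(1-x_i).
# Toy parameters; the paper estimates these from sampled instances.
import numpy as np

pi = np.array([0.5, 0.5])                   # mixture weights p(k)
mu = np.array([[0.9, 0.8, 0.1],             # component 0 Bernoulli probabilities
               [0.1, 0.2, 0.9]])            # component 1 Bernoulli probabilities

def p_x(x):
    comp = pi * np.prod(mu ** x * (1 - mu) ** (1 - x), axis=1)   # p(k) p(x|k)
    return comp.sum(), comp / comp.sum()    # likelihood and responsibilities p(k|x)

likelihood, resp = p_x(np.array([1, 1, 0]))
print(likelihood, resp)
```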
            {
                "text": "A Bernoulli Mixture can represent co-occurrence patterns between the m random variables it models without assuming competition between them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
            {
                "text": "To test the effect of modeling cross-predicate selectional constraints, we estimate a Bernoulli Mixture model from n instances w for each w \u2208 V , sampled from w Ind (which is learned as in the Count Independent model). Given a Bernoulli Mixture model of |K| components, we then assign each context item w to its closest mixture component as follows. Say the instances of w used to estimate the Bernoulli Mixture were {w 1 , . . . , w n }, then we assign w to the component",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "k w = argmax k n j=1 p(k|w j )",
                        "eq_num": "(5)"
                    }
                ],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
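Given such responsibilities for the n sampled instances of a context item (toy numbers below), the hard assignment of Eq. (5) is a single argmax:

```python
# Sketch of Eq. (5): assign context item w to the component maximising the
# summed responsibilities over its sampled instances (toy numbers).
import numpy as np

responsibilities = np.array([[0.9, 0.1],    # p(k | w_j) for instance j=1
                             [0.7, 0.3],    # j=2
                             [0.6, 0.4]])   # j=3

k_w = int(np.argmax(responsibilities.sum(axis=0)))   # argmax_k sum_j p(k|w_j)
print(k_w)                                            # -> 0
```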
            {
                "text": "We then re-train the representations of context items in the Count Multinomial condition, treating each occurrence of c with context w as an occurrence of c with k w . This yields a Count Multinomial model called Count BernMix H1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
            {
                "text": "To test the effect of modeling property co-occurrences, we estimate a |K|-component Bernoulli Mixture model from n instances of each known concept c \u2208 C, sampled from c Ind . We then represent each concept c by a vector c Mult , a multinomial with |K| parameters, as follows. Say the instances of c used to estimate the Bernoulli Mixture were {c 1 , . . . , c n }, then the k-th entry in c Mult is the average probability, over all c i , of being generated by component k:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
            {
                "text": "EQUATION",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [
                    {
                        "start": 0,
                        "end": 8,
                        "text": "EQUATION",
                        "ref_id": "EQREF",
                        "raw_str": "c k = 1 n n j=1 p(k|c j )",
                        "eq_num": "(6)"
                    }
                ],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
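Eq. (6) is the corresponding soft version for concepts: averaging the responsibilities of a concept's sampled instances yields its multinomial over components (again with invented numbers):

```python
# Sketch of Eq. (6): c_Mult averages component responsibilities over the
# concept's sampled instances (toy numbers).
import numpy as np

responsibilities = np.array([[0.9, 0.1],    # p(k | c_j) for instance j=1
                             [0.8, 0.2],    # j=2
                             [0.5, 0.5]])   # j=3

c_mult = responsibilities.mean(axis=0)      # (1/n) sum_j p(k|c_j)
print(c_mult)                               # a multinomial over |K| components
```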
            {
                "text": "This can be used as a Count Multinomial model where the entries in c Mult stand for Bernoulli Mixture components rather than individual properties. We refer to it as Count BernMix H2. 2 Finally, we extend the bi-TM with the H2 Bernoulli Mixture in the same way as a Count Multinomial model, and list this extension as bi-TM BernMix H2. While the bi-TM already implements both H1 and H2, its assumption of competition between all properties is simplistic, and bi-TM BernMix H2 tests whether lifting this assumption will yield a better model. We do not extend the bi-TM with the H1 Bernoulli Mixture, as the assumption of competition between context items that the bi-TM makes is appropriate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Bernoulli Mixtures",
                "sec_num": "3.3"
            },
            {
                "text": "Definitional properties. As we use probabilistic models, we need probabilities of properties applying to concept instances. So the QMR dataset (Herbelot and Vecchi, 2015b) is ideally suited. QMR has 532 concrete noun concepts, each associated with a set of quantified properties. The quantifiers have been given probabilistic interpretations, mapping all\u21921, most\u21920.95, some\u21920.35, few\u21920.05, none\u21920. 3 Each concept/property pair was judged by 3 raters. We choose the majority rating when it exists, and otherwise the minimum proposed rating. To address sparseness, especially for the one-shot learning setting, we omit properties that are named for fewer than 5 concepts. This leaves us with 503 concepts and 220 properties We intentionally choose this small dataset: One of our main objectives is to explore the possibility of learning effectively from very limited training data. In addition, while the feature norm dataset is small, our distributional dataset (the BNC, see below) is not. The latter essentially serves as a pivot for us to propagate the knowledge from the feature norm data to the wider semantic space.",
                "cite_spans": [
                    {
                        "start": 143,
                        "end": 171,
                        "text": "(Herbelot and Vecchi, 2015b)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
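A sketch of the preprocessing just described, under an assumed (hypothetical) input format mapping each concept to its quantified properties; the quantifier-to-probability mapping follows the values given above, and the filtering threshold is the stated minimum of 5 concepts per property.

```python
# Sketch of QMR preprocessing: quantifiers mapped to probabilities, properties
# named for fewer than 5 concepts dropped. Input format is a hypothetical
# {concept: {property: quantifier}} dictionary.
QUANTIFIER_PROB = {"all": 1.0, "most": 0.95, "some": 0.35, "few": 0.05, "none": 0.0}

def filter_properties(concept_props, min_concepts=5):
    counts = {}
    for props in concept_props.values():
        for prop in props:
            counts[prop] = counts.get(prop, 0) + 1
    keep = {p for p, n in counts.items() if n >= min_concepts}
    return {c: {p: QUANTIFIER_PROB[q] for p, q in props.items() if p in keep}
            for c, props in concept_props.items()}
```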
            {
                "text": "It is a problem of both the original McRae et al. (2005) data and QMR that if a property is not named by participants, it is not listed, even if it applies. For example, the property four-legged 2 We use the H2 Bernoulli Mixture as a soft clustering because it is straightforward to do this through concept representations. For the H1 mixture, we did not see an obvious soft clustering, so we use it as a hard clustering. 3 The dataset also contains KIND properties that do not have probabilistic interpretations. Following Herbelot and Vecchi (2015a) we omit these properties.",
                "cite_spans": [
                    {
                        "start": 37,
                        "end": 56,
                        "text": "McRae et al. (2005)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 195,
                        "end": 196,
                        "text": "2",
                        "ref_id": null
                    },
                    {
                        "start": 422,
                        "end": 423,
                        "text": "3",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
            {
                "text": "is missing for alligator in QMR. So we additionally use the Animal dataset of Herbelot (2013) , where every property has a rating for every concept. The dataset comprises 72 animal concepts with quantification information for 54 properties.",
                "cite_spans": [
                    {
                        "start": 78,
                        "end": 93,
                        "text": "Herbelot (2013)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
            {
                "text": "Distributional data. We use the British National Corpus (BNC) (The BNC Consortium, 2007) , with dependency parses from Spacy. 4 As context items, we use pairs pred, dep of predicates pred that are content words (nouns, verbs, adjectives, adverbs) but not stopwords, where a concept from the respective dataset (QMR, Animal) is a dependency child of pred via dep. In total we obtain a vocabulary of 500 QMR concepts and 72 Animal concepts that appear in the BNC, and 29,124 context items. We refer to this syntactic context as Syn. For comparison, we also use a baseline model with a bag-of-words (BOW) context window of 2 or 5 words, with stopwords removed.",
                "cite_spans": [
                    {
                        "start": 62,
                        "end": 88,
                        "text": "(The BNC Consortium, 2007)",
                        "ref_id": "BIBREF31"
                    },
                    {
                        "start": 126,
                        "end": 127,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
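A minimal sketch of this context extraction, using spaCy dependency parses; the model name, the lemma-based concept matching, and the function names are assumptions made for illustration, not details from the paper.

```python
import spacy

nlp = spacy.load("en_core_web_sm")   # assumed model; the paper only specifies Spacy parses of the BNC
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}

def syn_contexts(sentence, concepts):
    """Yield (concept, (pred, dep)) pairs: the concept is a dependency child
    of a content-word, non-stopword predicate `pred` via relation `dep`."""
    for tok in nlp(sentence):
        if tok.lemma_.lower() in concepts:
            head = tok.head
            if head is not tok and head.pos_ in CONTENT_POS and not head.is_stop:
                yield tok.lemma_.lower(), (head.lemma_.lower(), tok.dep_)

# e.g. "She undid the gown." yields roughly [("gown", ("undo", "dobj"))]
print(list(syn_contexts("She undid the gown.", {"gown"})))
```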
            {
                "text": "Models. We test our probabilistic models as defined in the previous section. While our focus is on one-shot learning, we also evaluate a multishot setting where we learn from the whole BNC, as a sanity check on our models. (We do not test our models in an incremental learning setting that adds one occurrence at a time. While this is possible in principle, the computational cost is prohibitive for the bi-TM.) We compare to the Partial Least Squares (PLS) model of Herbelot and Vecchi (2015a) 5 to see whether our models perform at state of the art levels. We also compare to a baseline that always predicts the probability of a property to be its relative frequency in the set C of known concepts (Baseline).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
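As a reference point, the frequency baseline can be written down in a few lines; the array layout of the known concepts is an assumption of this sketch.

```python
import numpy as np

def frequency_baseline(known_concept_vectors):
    """Score each property by its relative frequency among the known concepts C.
    known_concept_vectors: |C| x |Q| array of property probabilities (assumed layout).
    The same scores are predicted for every unknown concept, regardless of context."""
    binary = (np.asarray(known_concept_vectors) > 0).astype(float)
    return binary.mean(axis=0)
```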
            {
                "text": "We can directly use the property probabilities in QMR and the Animal data as concept representations c Ind for the Count Independent model. For the Count Multinomial model, we never explicitly compute c Mult . To sample from it, we first sample an instance c \u2208 {0, 1} |Q| from the independent Bernoulli vector of c, c Ind . From the properties that apply to c, we sample one (with equal probabilities) as the observed property. All priors for the count-based models (Beta priors or Dirichlet priors, respectively) are set to 1.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
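A minimal sketch of this sampling scheme; the vector representation and the function name are illustrative, and it assumes at least one property has non-zero probability.

```python
import numpy as np

def sample_observed_property(c_ind, rng=None):
    """Sample one observed property index for a concept: draw an instance from the
    independent Bernoulli vector c_ind, then pick uniformly among the properties
    that apply to that instance."""
    rng = rng or np.random.default_rng()
    c_ind = np.asarray(c_ind, dtype=float)
    while True:                                   # retry if the sampled instance has no properties
        instance = rng.random(len(c_ind)) < c_ind
        applying = np.flatnonzero(instance)
        if applying.size:
            return int(rng.choice(applying))
```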
            {
                "text": "For the bi-TM, a pseudo-document for a known concept c is generated as follows: Given an occurrence of known concept c with context item w in the BNC, we sample a property q from c (in the same way as for the Count Multinomial model), and add w, q to the pseudo-document for c. For training the bi-TM, we use collapsed Gibbs sampling (Steyvers and Griffiths, 2007) with 500 iterations for burn-in. The Dirichlet priors are uniformly set to 0.1 following Roller and Schulte im Walde (2013). We use 50 topics throughout.",
                "cite_spans": [
                    {
                        "start": 334,
                        "end": 364,
                        "text": "(Steyvers and Griffiths, 2007)",
                        "ref_id": "BIBREF30"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
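A minimal sketch of the pseudo-document construction; it reuses the property-sampling sketch above and assumes the corpus occurrences are available as (concept, context_item) pairs. The subsequent collapsed Gibbs sampling itself is not shown.

```python
from collections import defaultdict

def build_pseudo_documents(occurrences, concept_reprs, sample_observed_property, rng):
    """occurrences: iterable of (concept, context_item) pairs extracted from the BNC.
    concept_reprs: dict mapping each known concept to its Bernoulli property vector.
    Returns one pseudo-document per concept: a list of (context_item, property) tokens."""
    docs = defaultdict(list)
    for concept, w in occurrences:
        q = sample_observed_property(concept_reprs[concept], rng)  # as for the Count Multinomial model
        docs[concept].append((w, q))
    return docs
```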
            {
                "text": "For all our models, we report the average performance from 5 runs. For the PLS benchmark, we use 50 components with otherwise default settings, following Herbelot and Vecchi (2015a) .",
                "cite_spans": [
                    {
                        "start": 154,
                        "end": 181,
                        "text": "Herbelot and Vecchi (2015a)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
            {
                "text": "Evaluation. We test all models using 5-fold cross validation and report average performance across the 5 folds. We evaluate performance using Mean Average Precision (MAP) , which tests to what extent a model ranks definitional properties in the same order as the gold data. Assume a system that predicts a ranking of n datapoints, where 1 is the highest-ranked, and assume that each datapoint i has a gold rating of I(i) \u2208 {0, 1}. This system obtains an Average Precision (AP) of",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
            {
                "text": "AP = 1 n i=1 I(i) n i=1 Prec i \u2022 I(i)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
            {
                "text": "where Prec i is precision at a cutoff of i. Mean Average Precision is the mean over multiple AP values. In our case, n = |Q|, and we compare a model-predicted ranking of property probabilities with a binary gold rating of whether the property applies to any instances of the given concept. For the one-shot evaluation, we make a separate prediction for each occurrence of an unknown concept u in the BNC, and report MAP by averaging over the AP values for all occurrences of u.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data and Experimental Setup",
                "sec_num": "4"
            },
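A direct transcription of the AP formula above; the array-based interface is an assumption of this sketch, and it requires at least one gold-positive property.

```python
import numpy as np

def average_precision(scores, gold):
    """AP for one concept: `scores` are predicted property probabilities,
    `gold` are binary indicators of whether each property applies (length |Q|)."""
    order = np.argsort(scores)[::-1]                       # rank properties by predicted probability
    gold = np.asarray(gold, dtype=float)[order]
    prec_at_i = np.cumsum(gold) / np.arange(1, len(gold) + 1)
    return float(np.sum(prec_at_i * gold) / np.sum(gold))

def mean_average_precision(all_scores, all_gold):
    """MAP: mean of the AP values, e.g. over concepts or over occurrences of a concept."""
    return float(np.mean([average_precision(s, g) for s, g in zip(all_scores, all_gold)]))
```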
            {
                "text": "Multi-shot learning. While our focus in this paper is on one-shot learning, we first test all models in a multi-shot setting. The aim is to see how well they perform when given ample amounts of training data, and to be able to compare their performance to an existing multi-shot model (as we will not have any related work to compare to for the one-shot setting.) The results are shown in Table 1 , where Syn shows results that use syntactic context (encoding selectional constraints) and BOW5 is a bag-of-words context with a window size of 5. We only compare our models to the baseline and benchmark for now, and do an indepth comparison of our models when we get to the one-shot task, which is our main focus. Across all models, the syntactic context outperforms the bag-of-words context. We also tested a bag-of-words context with window size 2 and found it to have a performance halfway between Syn and BOW5 throughout. This confirms our assumption that it is reasonable to focus on syntactic context, and for the rest of this paper, we test models with syntactic context only.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 389,
                        "end": 396,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "5"
            },
            {
                "text": "Focusing on Syn conditions now, we see that almost all models outperform the property frequency baseline, though the MAP scores for the baseline do not fall far behind those of the weakest count-based models. 6 The best of our models perform on par with the PLS benchmark of Herbelot and Vecchi (2015a) on QMR, and on the Animal dataset they outperform the benchmark. Comparing the two datasets, we see that all models show better performance on the cleaner (and smaller) Animal dataset than on QMR. This is probably because QMR suffers from many false negatives (properties that apply but were not mentioned), while Animal does not. The Count Independent model shows similar performance here and throughout all later experiments to the Count Multinomial (even though it matches the construction of the QMR and Animal datasets better), so to avoid clutter we do not report on it further below.",
                "cite_spans": [
                    {
                        "start": 209,
                        "end": 210,
                        "text": "6",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "5"
            },
            {
                "text": "One-shot learning. Table 2 : MAP scores, one-shot learning on the QMR and Animal datasets mance of our models on the one-shot learning task. We cannot evaluate the benchmark PLS as it is not suitable for one-shot learning. The baseline is the same as in Table 1 . The numbers shown are Average Precision (AP) values for learning from a single occurrence. Column all averages over all occurrences of a target in the BNC (using only context items that appeared at least 5 times in the BNC), and column oracle top-20 averages over the 20 context items that have the highest AP for the given target. As can be seen, AP varies widely across sentences: When we average over all occurrences of a target in the BNC, performance is close to baseline level. 7 But the most informative instances yield excellent information about an unknown concept, and lead to MAP values that are much higher than those achieved in multi-shot learning (Table 1) . We explore this more below. Comparing our models, we see that the bi-TM does much better throughout than any of the countbased models. Since the bi-TM model implements both cross-predicate selectional constraints (H1) and property co-occurrence (H2), we find both of our hypotheses confirmed by these results. The Bernoulli mixtures improved performance on the Animal dataset, with no clear pattern of which one improved performance more. On QMR, adding a Bernoulli mixture model harms performance across both the count-based and bi-TM models. We suspect that this is because of the false negative entries in QMR; an inspection of Bernoulli mixture H2 components supports this intuition, as the QMR ones were found to be of poorer quality than those for the Animal data.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 19,
                        "end": 26,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 254,
                        "end": 261,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 926,
                        "end": 935,
                        "text": "(Table 1)",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "5"
            },
            {
                "text": "Comparing Tables 1 and 2 we see that they show 7 Context items with few occurrences in the corpus perform considerably worse than baseline, as their property distributions are dominated by the small number of concepts with which they appear. Table 4 : QMR one-shot: AP for top and bottom 5 context items of gown the same patterns of performance: Models that do better on the multi-shot task also do better on the one-shot task. This is encouraging in that it suggests that it should be possible to build incremental models that do well both in a low-data and an abundant-data setting. Table 3 looks in more detail at what it is that the models are learning by showing the five highestprobability properties they are predicting for the concept gown. The top two entries are multishot models, the third shows the one-shot result from the context item with the highest AP. The bi-TM results are very good in both the multi-shot and the one-shot setting, giving high probability to some quite specific properties like has sleeves. The count-based model shows a clear frequency bias in erroneously giving high probabilities to the two overall most frequent properties, made of metal and an animal. This is due to the additive nature of the Count model: In updating unknown concepts from context items, frequent properties are more likely to be sampled, and their effect accumulates as the model does not take into account interactions among context items. The bi-TM, which models these interactions, is much more robust to the effect of property frequency.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 10,
                        "end": 24,
                        "text": "Tables 1 and 2",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 242,
                        "end": 249,
                        "text": "Table 4",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 585,
                        "end": 592,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Results and Discussion",
                "sec_num": "5"
            },
            {
                "text": "Informativity. In Table 2 we saw that one-shot performance averaged over all context items in the whole corpus was quite bad, but that good, informative context items can yield high-quality property information. ther. For the concept gown, it shows the five context items that yielded the highest AP values, at the top undo-obj, with an AP as high as 0.7. This raises the question of whether we can predict the informativity of a context item. 8 We test three measures of informativity. The first is simply the frequency of the context item, with the rationale that more frequent context items should have more stable representations. Our second measure is based on entropy. For each context item w, we compute a distribution over properties as in the count-independent model, and measure the entropy of this distribution. If the distribution has few properties account for a majority of the probability mass, then w will have a low entropy, and would be expected to be more informative. Our third measure is based on the same intuition, that items with more \"concentrated\" selectional constraints should be more informative. If a context item w has been observed to occur with known concepts c 1 , . . . , c n , then this measure is the average cosine (AvgCos) of the property distributions (viewed as vectors) of any pair of c i , c j \u2208 {c 1 , . . . , c n }.",
                "cite_spans": [
                    {
                        "start": 444,
                        "end": 445,
                        "text": "8",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 18,
                        "end": 25,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Count",
                "sec_num": null
            },
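A minimal sketch of the three informativity measures; the vector-based inputs and function names are assumptions of this illustration.

```python
import numpy as np
from itertools import combinations

def entropy(prop_dist):
    """Entropy of a context item's property distribution (zero entries ignored);
    lower entropy is expected to indicate a more informative context item."""
    p = np.asarray(prop_dist, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

def avg_cos(concept_vectors):
    """AvgCos: average pairwise cosine of the property distributions (as vectors)
    of the known concepts c_1, ..., c_n observed with a context item."""
    vecs = [np.asarray(v, dtype=float) for v in concept_vectors]
    sims = [v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
            for v, w in combinations(vecs, 2)]
    return float(np.mean(sims)) if sims else 0.0

# The first measure is simply the corpus frequency of the context item,
# e.g. a Counter over the extracted context items.
```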
            {
                "text": "We evaluate the three informativity measures using Spearman's rho to determine the correlation of the informativity of a context item with the AP it produces for each unknown concept. We expect frequency and AvgCos to be positively correlated with AP, and entropy to be negatively correlated with AP. The result is shown in Table 5 . Again, all measures work better on the Animal data than on QMR, where they at best approach significance. The correlation is much better on the bi-TM models than on the count-based models, which is probably due to their higher-quality predictions. Overall, AvgCos emerges as the most robust indicator 8 Lazaridou et al. (2016) , who use a bag-of-words context in one-shot experiments, propose an informativity measure based on the number of contexst that constitute properties. we cannot do that with our syntactic context. Table 6 : QMR, bi-TM, one-shot: MAP by property type over (oracle) top 20 context items for informativity. 9 We now test AvgCos, as our best informativity measure, on its ability to select good context items. The last column of Table 2 shows MAP results for the top 20 context items based on their AvgCos values. The results are much below the oracle MAP (unsurprisingly, given the correlations in Table 5 ), but for QMR they are at the level of the multi-shot results of Table 1, showing that it is possible to some extent to automatically choose informative examples for one-shot learning. Properties by type. McRae et al. (2005) classify properties based on the brain region taxonomy of Cree and McRae (2003) . This enables us to test what types of properties are learned most easily in our fast-mapping setup by computing average AP separately by property type. To combat sparseness, we group property types into five groups, function (the function or use of an entity), taxonomic, visual, encyclopaedic, and other perceptual (e.g., sound). Intuitively, we would expect our contexts to best reflect taxonomic and function properties: Predicates that apply to noun target concepts often express functions of those targets, and manually specified selectional constraints are often characterized in terms of taxonomic classes. Table 6 confirms this intuition. Taxonomic properties achieve the highest MAP by a large margin, followed by functional properties. Visual properties score the lowest.",
                "cite_spans": [
                    {
                        "start": 635,
                        "end": 660,
                        "text": "8 Lazaridou et al. (2016)",
                        "ref_id": null
                    },
                    {
                        "start": 965,
                        "end": 966,
                        "text": "9",
                        "ref_id": null
                    },
                    {
                        "start": 1470,
                        "end": 1489,
                        "text": "McRae et al. (2005)",
                        "ref_id": "BIBREF23"
                    },
                    {
                        "start": 1548,
                        "end": 1569,
                        "text": "Cree and McRae (2003)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 324,
                        "end": 331,
                        "text": "Table 5",
                        "ref_id": "TABREF6"
                    },
                    {
                        "start": 858,
                        "end": 865,
                        "text": "Table 6",
                        "ref_id": null
                    },
                    {
                        "start": 1086,
                        "end": 1093,
                        "text": "Table 2",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 1256,
                        "end": 1263,
                        "text": "Table 5",
                        "ref_id": "TABREF6"
                    },
                    {
                        "start": 2186,
                        "end": 2193,
                        "text": "Table 6",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Count",
                "sec_num": null
            },
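A minimal sketch of this evaluation and of the AvgCos-based selection of context items, using scipy's Spearman correlation; the data structures and function names are assumptions of the sketch.

```python
from scipy.stats import spearmanr

def correlate_informativity(informativity_scores, ap_scores):
    """Spearman's rho (and p-value) between an informativity measure and the
    AP that each context item yields for an unknown concept (cf. Table 5)."""
    rho, p_value = spearmanr(informativity_scores, ap_scores)
    return rho, p_value

def top_k_context_items(context_items, avg_cos_score, k=20):
    """Select the k context items with the highest AvgCos values (cf. Table 2)."""
    return sorted(context_items, key=avg_cos_score, reverse=True)[:k]
```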
            {
                "text": "We have developed several models for one-shot learning word meanings from single textual contexts. Our models were designed learn word properties using distributional contexts (H1) or about co-occurrences of properties (H2). We find evidence that both kinds of general knowledge are helpful, especially when combined (in the bi-TM), or when used on clean property data (in the Animal dataset). We further saw that some contexts are highly informative, and preliminary expirements in informativity measures found that average pairwise similarity of seen role fillers (Avg-Cos) achieves some success in predicting which contexts are most useful.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "In the future, we hope to test with other types of general knowledge, including a taxonomy of known concepts (Xu and Tenenbaum, 2007) ; wider-coverage property data (Baroni and Lenci, 2010, Type-DM) ; and alternative modalities (Lazaridou et al., 2016 , image features as \"properties\"). We expect our model will scale to these larger problems easily.",
                "cite_spans": [
                    {
                        "start": 109,
                        "end": 133,
                        "text": "(Xu and Tenenbaum, 2007)",
                        "ref_id": "BIBREF35"
                    },
                    {
                        "start": 165,
                        "end": 198,
                        "text": "(Baroni and Lenci, 2010, Type-DM)",
                        "ref_id": null
                    },
                    {
                        "start": 228,
                        "end": 251,
                        "text": "(Lazaridou et al., 2016",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "We would also like to explore better informativity measures and improvements for AvgCos. Knowledge about informative examples can be useful in human-in-the-loop settings, for example a user aiming to illustrate classes in an ontology with a few typical corpus examples. We also note that the bi-TM cannot be used in for truly incremental learning, as the cost of global re-computation after each seen example is prohibitive. We would like to explore probabilistic models that support incremental word learning, which would be interesting to integrate with an overall probabilistic model of semantics (Goodman and Lassiter, 2014).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "In this paper, we interchangeably use the terms unknown word and unknown concept, as we learn properties, and properties belong to concepts rather than words, and we learn them from text, where we observe words rather than concepts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://spacy.io5 Herbelot and Vecchi (2015a) is the only directly relevant previous work on the subject. Further, to the best of our knowledge, for one-shot property learning from text (only), our work has been the first attempt.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "This is because MAP gives equal credit for all properties correctly predicted as non-zero. When we evaluate with Generalized Average Precision (GAP)(Kishida, 2005), which takes gold weights into account, the baseline model is roughly 10 points below other models. This indicates our models learn approximate property distributions. We omit GAP scores because they correlate strongly with MAP for non-baseline models.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We also tested a binned variant of the frequency measure, on the intuition that medium-frequency context items should be more informative than either highly frequent or rare ones. However, this measure did not show better performance than the non-binned frequency measure.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This research was supported by the DARPA DEFT program under AFRL grant FA8750-13-2-0026 and by the NSF CAREER grant IIS 0845925. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author and do not necessarily reflect the view of DARPA, DoD or the US government. We acknowledge the Texas Advanced Computing Center for providing grid resources that contributed to these results.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Integrating experiential and distributional data to learn semantic representations",
                "authors": [
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Andrews",
                        "suffix": ""
                    },
                    {
                        "first": "Gabriella",
                        "middle": [],
                        "last": "Vigliocco",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Vinson",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Psychological Review",
                "volume": "116",
                "issue": "3",
                "pages": "463--498",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mark Andrews, Gabriella Vigliocco, and David Vin- son. 2009. Integrating experiential and distribu- tional data to learn semantic representations. Psy- chological Review, 116(3):463-498.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Distributional memory: a general framework for corpus-based semantics",
                "authors": [
                    {
                        "first": "Marco",
                        "middle": [],
                        "last": "Baroni",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandero",
                        "middle": [],
                        "last": "Lenci",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Computational Linguistics",
                "volume": "36",
                "issue": "4",
                "pages": "673--721",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Marco Baroni and Alexandero Lenci. 2010. Dis- tributional memory: a general framework for corpus-based semantics. Computational Linguis- tics, 36(4):673-721.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Latent Dirichlet Allocation",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Blei",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew",
                        "middle": [],
                        "last": "Ng",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Jordan",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Journal of Machine Learning Research",
                "volume": "3",
                "issue": "4-5",
                "pages": "993--1022",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Blei, Andrew Ng, and Michael Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3(4-5):993-1022.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Acquiring a single new word",
                "authors": [
                    {
                        "first": "Susan",
                        "middle": [],
                        "last": "Carey",
                        "suffix": ""
                    },
                    {
                        "first": "Elsa",
                        "middle": [],
                        "last": "Bartlett",
                        "suffix": ""
                    }
                ],
                "year": 1978,
                "venue": "Papers and Reports on Child Language Development",
                "volume": "15",
                "issue": "",
                "pages": "17--29",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Susan Carey and Elsa Bartlett. 1978. Acquiring a sin- gle new word. Papers and Reports on Child Lan- guage Development, 15:17-29.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "From the lexicon to expectations about kinds: A role for associative learning",
                "authors": [
                    {
                        "first": "Eliana",
                        "middle": [],
                        "last": "Colunga",
                        "suffix": ""
                    },
                    {
                        "first": "Linda",
                        "middle": [
                            "B"
                        ],
                        "last": "Smith",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Psychological Review",
                "volume": "112",
                "issue": "2",
                "pages": "347--382",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eliana Colunga and Linda B. Smith. 2005. From the lexicon to expectations about kinds: A role for asso- ciative learning. Psychological Review, 112(2):347- 382.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns)",
                "authors": [
                    {
                        "first": "George",
                        "middle": [
                            "S"
                        ],
                        "last": "Cree",
                        "suffix": ""
                    },
                    {
                        "first": "Ken",
                        "middle": [],
                        "last": "Mcrae",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Journal of Experimental Psychology: General",
                "volume": "132",
                "issue": "",
                "pages": "163--201",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "George S. Cree and Ken McRae. 2003. Analyzing the factors underlying the structure and computation of the meaning of chipmunk, cherry, chisel, cheese, and cello (and many other such concrete nouns). Jour- nal of Experimental Psychology: General, 132:163- 201.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Measuring distributional similarity in context",
                "authors": [
                    {
                        "first": "Georgiana",
                        "middle": [],
                        "last": "Dinu",
                        "suffix": ""
                    },
                    {
                        "first": "Mirella",
                        "middle": [],
                        "last": "Lapata",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Georgiana Dinu and Mirella Lapata. 2010. Measuring distributional similarity in context. In Proceedings of EMNLP, Cambridge, MA.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "A flexible, corpus-driven model of regular and inverse selectional preferences",
                "authors": [
                    {
                        "first": "Katrin",
                        "middle": [],
                        "last": "Erk",
                        "suffix": ""
                    },
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "Pad\u00f3",
                        "suffix": ""
                    },
                    {
                        "first": "Ulrike",
                        "middle": [],
                        "last": "Pad\u00f3",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Computational Linguistics",
                "volume": "",
                "issue": "4",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Katrin Erk, Sebastian Pad\u00f3, and Ulrike Pad\u00f3. 2010. A flexible, corpus-driven model of regular and inverse selectional preferences. Computational Linguistics, 36(4).",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "From distributional semantics to feature norms: Grounding semantic models in human perceptual data",
                "authors": [
                    {
                        "first": "Luana",
                        "middle": [],
                        "last": "F\u0203g\u0203r\u0203\u015fan",
                        "suffix": ""
                    },
                    {
                        "first": "Eva",
                        "middle": [
                            "Maria"
                        ],
                        "last": "Vecchi",
                        "suffix": ""
                    },
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Clark",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of IWCS",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Luana F\u0203g\u0203r\u0203\u015fan, Eva Maria Vecchi, and Stephen Clark. 2015. From distributional semantics to fea- ture norms: Grounding semantic models in human perceptual data. In Proceedings of IWCS, London, Great Britain.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Background to FrameNet",
                "authors": [
                    {
                        "first": "C",
                        "middle": [
                            "J"
                        ],
                        "last": "Fillmore",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [
                            "R"
                        ],
                        "last": "Johnson",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Petruck",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "International Journal of Lexicography",
                "volume": "16",
                "issue": "",
                "pages": "235--250",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. J. Fillmore, C. R. Johnson, and M. Petruck. 2003. Background to FrameNet. International Journal of Lexicography, 16:235-250.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Probabilistic semantics and pragmatics: Uncertainty in language and thought",
                "authors": [
                    {
                        "first": "Noah",
                        "middle": [
                            "D"
                        ],
                        "last": "Goodman",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Lassiter",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Handbook of Contemporary Semantics",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Noah D. Goodman and Daniel Lassiter. 2014. Prob- abilistic semantics and pragmatics: Uncertainty in language and thought. In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics. Wiley-Blackwell.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "What is in a text, what isn't and what this has to do with lexical semantics. Proceedings of IWCS",
                "authors": [
                    {
                        "first": "Aur\u00e9lie",
                        "middle": [],
                        "last": "Herbelot",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Aur\u00e9lie Herbelot. 2013. What is in a text, what isn't and what this has to do with lexical semantics. Pro- ceedings of IWCS.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Building a shared world:mapping distributional to modeltheoretic semantic spaces",
                "authors": [
                    {
                        "first": "Aur\u00e9lie",
                        "middle": [],
                        "last": "Herbelot",
                        "suffix": ""
                    },
                    {
                        "first": "Eva",
                        "middle": [],
                        "last": "Vecchi",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Aur\u00e9lie Herbelot and Eva Vecchi. 2015a. Building a shared world:mapping distributional to model- theoretic semantic spaces. In Proceedings of EMNLP.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Linguistic Issues in Language",
                "authors": [
                    {
                        "first": "Aur\u00e9lie",
                        "middle": [],
                        "last": "Herbelot",
                        "suffix": ""
                    },
                    {
                        "first": "Eva",
                        "middle": [
                            "Maria"
                        ],
                        "last": "Vecchi",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Technology",
                "volume": "12",
                "issue": "4",
                "pages": "1--20",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Aur\u00e9lie Herbelot and Eva Maria Vecchi. 2015b. Many speakers, many worlds. Linguistic Issues in Lan- guage Technology, 12(4):1-20.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Perceptual inference through global lexical similarity",
                "authors": [
                    {
                        "first": "Brendan",
                        "middle": [
                            "T"
                        ],
                        "last": "Johns",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "N"
                        ],
                        "last": "Jones",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Topics in Cognitive Science",
                "volume": "4",
                "issue": "",
                "pages": "103--120",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Brendan T Johns and Michael N Jones. 2012. Percep- tual inference through global lexical similarity. Top- ics in Cognitive Science, 4(1):103-120.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Bernoulli mixture models for binary images",
                "authors": [
                    {
                        "first": "Alfons",
                        "middle": [],
                        "last": "Juan",
                        "suffix": ""
                    },
                    {
                        "first": "Enrique",
                        "middle": [],
                        "last": "Vidal",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of ICPR",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alfons Juan and Enrique Vidal. 2004. Bernoulli mix- ture models for binary images. In Proceedings of ICPR.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Learning overhypotheses with hierarchical Bayesian models",
                "authors": [
                    {
                        "first": "Charles",
                        "middle": [],
                        "last": "Kemp",
                        "suffix": ""
                    },
                    {
                        "first": "Amy",
                        "middle": [],
                        "last": "Perfors",
                        "suffix": ""
                    },
                    {
                        "first": "Joshua",
                        "middle": [
                            "B"
                        ],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Developmental Science",
                "volume": "10",
                "issue": "3",
                "pages": "307--321",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Charles Kemp, Amy Perfors, and Joshua B. Tenen- baum. 2007. Learning overhypotheses with hier- archical Bayesian models. Developmental Science, 10(3):307-321.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "VerbNet: A broadcoverage, comprehensive verb lexicon",
                "authors": [
                    {
                        "first": "Karin",
                        "middle": [],
                        "last": "Kipper-Schuler",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Karin Kipper-Schuler. 2005. VerbNet: A broad- coverage, comprehensive verb lexicon. Ph.D. thesis, Computer and Information Science Dept., Univer- sity of Pennsylvania, Philadelphia, PA.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Property of average precision and its generalization: An examination of evaluation indicator for information retrieval experiments",
                "authors": [
                    {
                        "first": "Kazuaki",
                        "middle": [],
                        "last": "Kishida",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "NII Technical Reports",
                "volume": "",
                "issue": "14",
                "pages": "1--19",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kazuaki Kishida. 2005. Property of average precision and its generalization: An examination of evaluation indicator for information retrieval experiments. NII Technical Reports, 2005(14):1-19.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge",
                "authors": [
                    {
                        "first": "Thomas",
                        "middle": [],
                        "last": "Landauer",
                        "suffix": ""
                    },
                    {
                        "first": "Susan",
                        "middle": [],
                        "last": "Dumais",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Psychological Review",
                "volume": "",
                "issue": "",
                "pages": "211--240",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Thomas Landauer and Susan Dumais. 1997. A solution to Plato's problem: The latent semantic analysis the- ory of acquisition, induction, and representation of knowledge. Psychological Review, pages 211-240.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Is this a wampimuk? Cross-modal mapping between distributional semantics and the visual world",
                "authors": [
                    {
                        "first": "Angeliki",
                        "middle": [],
                        "last": "Lazaridou",
                        "suffix": ""
                    },
                    {
                        "first": "Elia",
                        "middle": [],
                        "last": "Bruni",
                        "suffix": ""
                    },
                    {
                        "first": "Marco",
                        "middle": [],
                        "last": "Baroni",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Angeliki Lazaridou, Elia Bruni, and Marco Baroni. 2014. Is this a wampimuk? Cross-modal map- ping between distributional semantics and the visual world. In Proceedings of ACL.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Multimodal word meaning induction from minimal exposure to natural text",
                "authors": [
                    {
                        "first": "Angeliki",
                        "middle": [],
                        "last": "Lazaridou",
                        "suffix": ""
                    },
                    {
                        "first": "Marco",
                        "middle": [],
                        "last": "Marelli",
                        "suffix": ""
                    },
                    {
                        "first": "Marco",
                        "middle": [],
                        "last": "Baroni",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Cognitive Science",
                "volume": "",
                "issue": "",
                "pages": "1--30",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Angeliki Lazaridou, Marco Marelli, and Marco Baroni. 2016. Multimodal word meaning induction from minimal exposure to natural text. Cognitive Science, pages 1-30.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "One-shot learning with Bayesian networks",
                "authors": [
                    {
                        "first": "Andrew",
                        "middle": [
                            "L"
                        ],
                        "last": "Maas",
                        "suffix": ""
                    },
                    {
                        "first": "Charles",
                        "middle": [],
                        "last": "Kemp",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proceedings of the 31st Annual Conference of the Cognitive Science Society",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andrew L. Maas and Charles Kemp. 2009. One-shot learning with Bayesian networks. In Proceedings of the 31st Annual Conference of the Cognitive Science Society, Amsterdam, The Netherlands.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Semantic feature production norms for a large set of living and nonliving things",
                "authors": [
                    {
                        "first": "Ken",
                        "middle": [],
                        "last": "Mcrae",
                        "suffix": ""
                    },
                    {
                        "first": "George",
                        "middle": [
                            "S"
                        ],
                        "last": "Cree",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [
                            "S"
                        ],
                        "last": "Seidenberg",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Mcnorgan",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Behavior Research Methods",
                "volume": "37",
                "issue": "4",
                "pages": "547--559",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ken McRae, George S. Cree, Mark S. Seidenberg, and Chris McNorgan. 2005. Semantic feature produc- tion norms for a large set of living and nonliving things. Behavior Research Methods, 37(4):547- 559.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Latent variable models of selectional preference",
                "authors": [
                    {
                        "first": "Diarmuid\u00f3",
                        "middle": [],
                        "last": "S\u00e9aghdha",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diarmuid\u00d3 S\u00e9aghdha. 2010. Latent variable models of selectional preference. In Proceedings of ACL.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Probabilistic distributional semantics with latent variable models",
                "authors": [
                    {
                        "first": "Diarmuid\u00f3",
                        "middle": [],
                        "last": "S\u00e9aghdha",
                        "suffix": ""
                    },
                    {
                        "first": "Anna",
                        "middle": [],
                        "last": "Korhonen",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Computational Linguistics",
                "volume": "40",
                "issue": "3",
                "pages": "587--631",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diarmuid\u00d3 S\u00e9aghdha and Anna Korhonen. 2014. Probabilistic distributional semantics with latent variable models. Computational Linguistics, 40(3):587-631.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "A Latent Dirichlet Allocation method for selectional preferences",
                "authors": [
                    {
                        "first": "Alan",
                        "middle": [],
                        "last": "Ritter",
                        "suffix": ""
                    },
                    {
                        "first": "Mausam",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    },
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Etzioni",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Proceedings of ACL",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alan Ritter, Mausam, and Oren Etzioni. 2010. A La- tent Dirichlet Allocation method for selectional pref- erences. In Proceedings of ACL.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "A multimodal lda model integrating textual, cognitive and visual modalities",
                "authors": [
                    {
                        "first": "Stephen",
                        "middle": [],
                        "last": "Roller",
                        "suffix": ""
                    },
                    {
                        "first": "Sabine",
                        "middle": [],
                        "last": "Schulte Im Walde",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of EMNLP",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Stephen Roller and Sabine Schulte im Walde. 2013. A multimodal lda model integrating textual, cognitive and visual modalities. In Proceedings of EMNLP.",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "How well do distributional models capture different types of semantic knowledge?",
                "authors": [
                    {
                        "first": "Dana",
                        "middle": [],
                        "last": "Rubinstein",
                        "suffix": ""
                    },
                    {
                        "first": "Effi",
                        "middle": [],
                        "last": "Levi",
                        "suffix": ""
                    },
                    {
                        "first": "Roy",
                        "middle": [],
                        "last": "Schwartz",
                        "suffix": ""
                    },
                    {
                        "first": "Ari",
                        "middle": [],
                        "last": "Rappoport",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of ACL",
                "volume": "2",
                "issue": "",
                "pages": "726--730",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Dana Rubinstein, Effi Levi, Roy Schwartz, and Ari Rappoport. 2015. How well do distributional mod- els capture different types of semantic knowledge? In Proceedings of ACL, volume 2, pages 726-730.",
                "links": null
            },
            "BIBREF29": {
                "ref_id": "b29",
                "title": "Object name learning provides on-the-job training for attention",
                "authors": [
                    {
                        "first": "Linda",
                        "middle": [
                            "B"
                        ],
                        "last": "Smith",
                        "suffix": ""
                    },
                    {
                        "first": "Susan",
                        "middle": [
                            "S"
                        ],
                        "last": "Jones",
                        "suffix": ""
                    },
                    {
                        "first": "Barbara",
                        "middle": [],
                        "last": "Landau",
                        "suffix": ""
                    },
                    {
                        "first": "Lisa",
                        "middle": [],
                        "last": "Gershkoff-Stowe",
                        "suffix": ""
                    },
                    {
                        "first": "Larissa",
                        "middle": [],
                        "last": "Samuelson",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Psychological Science",
                "volume": "13",
                "issue": "1",
                "pages": "13--19",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Linda B. Smith, Susan S. Jones, Barbara Landau, Lisa Gershkoff-Stowe, and Larissa Samuelson. 2002. Object name learning provides on-the-job training for attention. Psychological Science, 13(1):13-19.",
                "links": null
            },
            "BIBREF30": {
                "ref_id": "b30",
                "title": "Probabilistic topic models",
                "authors": [
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Steyvers",
                        "suffix": ""
                    },
                    {
                        "first": "Tom",
                        "middle": [],
                        "last": "Griffiths",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Handbook of Latent Semantic Analysis",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Mark Steyvers and Tom Griffiths. 2007. Probabilistic topic models. In T. Landauer, D.S. McNamara, S. Dennis, and W. Kintsch, eds., Handbook of Latent Semantic Analysis.",
                "links": null
            },
            "BIBREF31": {
                "ref_id": "b31",
                "title": "The British National Corpus",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "The BNC Consortium",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "The BNC Consortium. 2007. The British Na- tional Corpus, version 3 (BNC XML Edition).",
                "links": null
            },
            "BIBREF33": {
                "ref_id": "b33",
                "title": "From frequency to meaning: Vector space models of semantics",
                "authors": [
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Turney",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Pantel",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "Journal of Artificial Intelligence Research",
                "volume": "37",
                "issue": "",
                "pages": "141--188",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Peter Turney and Patrick Pantel. 2010. From fre- quency to meaning: Vector space models of se- mantics. Journal of Artificial Intelligence Research, 37:141-188.",
                "links": null
            },
            "BIBREF34": {
                "ref_id": "b34",
                "title": "Representing the meanings of object and action words: The featural and unitary semantic space hypothesis",
                "authors": [
                    {
                        "first": "Gabriella",
                        "middle": [],
                        "last": "Vigliocco",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Vinson",
                        "suffix": ""
                    },
                    {
                        "first": "William",
                        "middle": [],
                        "last": "Lewis",
                        "suffix": ""
                    },
                    {
                        "first": "Merrill",
                        "middle": [],
                        "last": "Garrett",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Cognitive Psychology",
                "volume": "48",
                "issue": "",
                "pages": "422--488",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Gabriella Vigliocco, David Vinson, William Lewis, and Merrill Garrett. 2004. Representing the meanings of object and action words: The featural and unitary semantic space hypothesis. Cognitive Psychology, 48:422-488.",
                "links": null
            },
            "BIBREF35": {
                "ref_id": "b35",
                "title": "Word learning as Bayesian inference",
                "authors": [
                    {
                        "first": "Fei",
                        "middle": [],
                        "last": "Xu",
                        "suffix": ""
                    },
                    {
                        "first": "Joshua",
                        "middle": [
                            "B"
                        ],
                        "last": "Tenenbaum",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Psychological Review",
                "volume": "114",
                "issue": "2",
                "pages": "245--272",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Fei Xu and Joshua B. Tenenbaum. 2007. Word learn- ing as Bayesian inference. Psychological Review, 114(2):245-272.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "Plate diagram for the Bimodal Topic Model (bi-TM)",
                "uris": null,
                "type_str": "figure",
                "num": null
            },
            "TABREF1": {
                "type_str": "table",
                "num": null,
                "text": "MAP scores, multi-shot learning on the QMR and Animal datasets",
                "content": "<table/>",
                "html": null
            },
            "TABREF2": {
                "type_str": "table",
                "num": null,
                "text": "",
                "content": "<table><tr><td>shows the perfor-</td></tr></table>",
                "html": null
            },
            "TABREF4": {
                "type_str": "table",
                "num": null,
                "text": "QMR: top 5 properties of gown.",
                "content": "<table><tr><td>Top 2</td></tr></table>",
                "html": null
            },
            "TABREF5": {
                "type_str": "table",
                "num": null,
                "text": "illustrates this point fur-TM BernMix H2 0.23 \u2022 -0.37 \u2022 0.52*",
                "content": "<table><tr><td/><td>Model</td><td>Freq.</td><td colspan=\"2\">Entropy AvgCos</td></tr><tr><td/><td>Count Mult.</td><td>0.09</td><td>-0.12</td><td>0.18</td></tr><tr><td>QMR</td><td colspan=\"2\">Count BernMix H1 0.07 Count BernMix H2 0.10 bi-TM plain 0.15 bi-TM BernMix H2 0.16</td><td>-0.10 -0.09 -0.09 -0.10</td><td>0.17 0.17 0.41 \u2022 0.39 \u2022</td></tr><tr><td>Ani.</td><td>bi-TM plain bi-</td><td>0.25</td><td>-0.40</td><td>0.49*</td></tr></table>",
                "html": null
            },
            "TABREF6": {
                "type_str": "table",
                "num": null,
                "text": "Correlation of informativity with AP, Spearman's \u03c1. * and \u2022 indicate significance at p < 0.05 and p < 0.1",
                "content": "<table/>",
                "html": null
            }
        }
    }
}