{
    "paper_id": "2021",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:24:28.317917Z"
    },
    "title": "\"Politeness, you simpleton!\" retorted [MASK]: Masked prediction of literary characters",
    "authors": [
        {
            "first": "Eric",
            "middle": [],
            "last": "Holgate",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "The University of Texas at Austin",
                "location": {}
            },
            "email": "holgate@utexas.edu"
        },
        {
            "first": "Katrin",
            "middle": [],
            "last": "Erk",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "The University of Texas at Austin",
                "location": {}
            },
            "email": "katrin.erk@utexas.edu"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "What is the best way to learn embeddings for entities, and what can be learned from them? We consider this question for the case of literary characters. We address the highly challenging task of guessing, from a sentence in the novel, which character is being talked about, and we probe the embeddings to see what information they encode about their literary characters. We find that when continuously trained, entity embeddings do well at the masked entity prediction task, and that they encode considerable information about the traits and characteristics of the entities.",
    "pdf_parse": {
        "paper_id": "2021",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "What is the best way to learn embeddings for entities, and what can be learned from them? We consider this question for the case of literary characters. We address the highly challenging task of guessing, from a sentence in the novel, which character is being talked about, and we probe the embeddings to see what information they encode about their literary characters. We find that when continuously trained, entity embeddings do well at the masked entity prediction task, and that they encode considerable information about the traits and characteristics of the entities.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Neural language models have led to huge improvements across many tasks in the last few years (Peters et al., 2018; Devlin et al., 2019; Radford et al., 2019) .",
                "cite_spans": [
                    {
                        "start": 93,
                        "end": 114,
                        "text": "(Peters et al., 2018;",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 115,
                        "end": 135,
                        "text": "Devlin et al., 2019;",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 136,
                        "end": 157,
                        "text": "Radford et al., 2019)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "1 They compute embeddings for words and word pieces. But when we describe the semantics of a sentence, we talk about entities and events and their relations, not words. And it is to be expected that more complex reasoning tasks would eventually require representations at the semantic level rather than the word level. Entities differ from words in that they are persistent, localized, and variable (within a given range). So, would it be beneficial to compute embeddings of entities in addition to embeddings of words for downstream inference? And how should entity embeddings be computed? There has been a steady rise in work on entity representations and how they can be combined with language models, for example Li et al. (2016) ; ; Rashkin et al. (2018) ; Louis and Sutton (2018) .In this paper, we add to the growing literature on neural representations of entities by considering a particularly challenging case: the representations of entities in very long texts, in particular in novels. Intriguingly, Bruera (2019) recently tested whether literary characters, when represented through distributional vectors trained on the first half of a novel, can be recognized in the second half, and found the task to be near impossible. We take up that same task, but train character embeddings in a masked character prediction task. We ask the following questions. (a) Is it possible to use literary character embeddings to do masked character prediction, that is, to guess from a sentence in a novel which character it mentions? (b) If this task is doable, is it doable only locally, or can we train on the first third of a novel and then guess characters towards the end of the novel? (c) What do the resulting embeddings tell us about the literary characters when we probe them? (d) Can the embeddings identify a literary character from a short description of their personality?",
                "cite_spans": [
                    {
                        "start": 717,
                        "end": 733,
                        "text": "Li et al. (2016)",
                        "ref_id": null
                    },
                    {
                        "start": 738,
                        "end": 759,
                        "text": "Rashkin et al. (2018)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 762,
                        "end": 785,
                        "text": "Louis and Sutton (2018)",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 1012,
                        "end": 1025,
                        "text": "Bruera (2019)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "We find that when continuously trained, entity embeddings do well at the masked entity prediction task, and that they encode considerable information about the traits and characteristics of the entities. Modeling semantics for natural language understanding is about modeling entities and events, not words. So we view this work as an initial step in the direction of entity modeling over time.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Entities have been increasingly common subjects within NLP research. There has been recent work aimed at inducing both characteristics of entities, such as personalities, physical and mental states, and character traits, as well as distributed entity representations, similar to lexical embeddings.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Psychologists have studied the relationship between personality traits and human behavior. Within NLP, there have been recent attempts to model this link computationally. Bamman et al. (2013) explored entity modeling by using Bayesian methods to induce moderately fine-grained character archetypes/stereotypes from film plot summaries. The authors utilized dependency relations to identify, for each entity, the verbs for which they were the agent, the verbs for which they were the patient, and any modifiers attributed to them. Bamman et al. successfully induced clusters that could be manually aligned with tropes like the jerk jock, the nerdy klutz, the villain, etc.",
                "cite_spans": [
                    {
                        "start": 171,
                        "end": 191,
                        "text": "Bamman et al. (2013)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Modeling Personalities and Characteristics",
                "sec_num": "2.1"
            },
            {
                "text": "Plank and Hovy (2015) recently appealed to psychological personality dimensions in relation to linguistic behavior. They constructed a dataset by crawling twitter for mentions of any of the 16 Myers-Briggs Type Indicators comprising four personality dimensions (MBTI; Myers and Myers 2010), labeling tweets with author gender identity. Plank and Hovy then train logistic regression models to predict each of the four dimensions from user tweet data using tweet context features and other features that are traditional for Twitter data (e.g., counts of tweets, followers, favorites, etc.). In all four dimensions, logistic regression classifiers outperform majority baselines, supporting the notion that linguistic behavior correlates with MBTI designations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Modeling Personalities and Characteristics",
                "sec_num": "2.1"
            },
            {
                "text": "Flekova and Gurevych (2015) similarly explored personality traits, though they utilized the Five-Factor Model of personality instead of MBTIs (John et al., 1999) . Here, authors collected extraversion/intraversion ratings for approximately 300 literary characters, and explore three sources of signal to predict the extraversion scores. The first system aligns most closely with Plank and Hovy's work as it considers only character speech (both style and content). Flekova and Gurevych go slightly farther, however, as they also show that character actions and behaviors as well as the descriptions of characters given in narration carry useful signal for extraversion prediction. Rashkin et al. (2018) modeled the mental state of characters in short stories, including motivations for behaviors and emotional reactions to events. The authors noted a substantial increase in performance in mental state classification when entity-specific contextual information was presented to the classifier, suggesting that entity-specific context may be useful to a wide array of downstream tasks. Louis and Sutton (2018) further explored the relation between character properties and actions taken in online role-playing game data. In Dungeons and Dragons, a giant is more likely than a fairy to wield a giant axe, but a fairy is more likely to be agile or cast spells. Louis and Sutton show that computational models can capture this interaction by using character description information in conjunction with action descriptions to train action and character language models. When a formal representation of a given character is included, performance improves. demonstrated dynamically tracked cooking ingredients, identifying which ingredient entity was selected in any given recipe step, and recognizing what changes in state they underwent as a result of the action described in the step. For example, these dynamic entity representations enabled the model to determine that an ingredient was clean after having been washed.",
                "cite_spans": [
                    {
                        "start": 142,
                        "end": 161,
                        "text": "(John et al., 1999)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 681,
                        "end": 702,
                        "text": "Rashkin et al. (2018)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 1086,
                        "end": 1109,
                        "text": "Louis and Sutton (2018)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Modeling Personalities and Characteristics",
                "sec_num": "2.1"
            },
            {
                "text": "Recently, major work in NLP has begun to explicitly model entities for use in downstream tasks. While still new (and limited in scope), much of this work has relied upon the notion of an Entity Library, a vocabulary of individuals which utilizes consecutive mentions to construct distributed vector representations, though methods of learning these representations have varied. Entity representations have been shown to improve the quality of generated text. In Ji et al. (2017) , researchers build a generative language model (an RNN) which has access to an entity library which contains continuous, dynamic representations of each entity mentioned in the text. The result is that the library explicitly groups coreferential mentions, and each generated mention affects the subsequently generated text.",
                "cite_spans": [
                    {
                        "start": 462,
                        "end": 478,
                        "text": "Ji et al. (2017)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entity Representations and Entity Libraries",
                "sec_num": "2.2"
            },
            {
                "text": "Tracking entity information has also been shown to be useful for increasing the consistency of responses in dialogue agents (Li et al., 2016) . Researchers introduce a conversation model which maintains a persona, defined as the character that the artificial agent performs during conversational interactions. The persona maintains elements of identity such as background facts, linguistic behavior (dialect), and interaction style (personality) in continuously updated distributed representations. The model maintains the capability for the persona to be adaptive, as the agent may need to present different characteristics to different interlocutors as interactions take place, but reduces the likelihood of the model providing contradictory information (i.e., maintaining these distributed representations prevents the model from claiming to live in both Los Angeles, Madrid, and England in consecutive queries). Crucially, this desirable change is achieved without the need for a structured ontology of properties, but instead through persona embeddings that are learned jointly with word representations. Fevry et al. (2020) demonstrates that entity representations trained only from text can capture more declarative knowledge about those entities than a similarly sized BERT. Researchers showed that these representations are useful for a variety of downstream tasks, including open domain question answering, relation extraction, entity typing, and generalized knowledge tasks.",
                "cite_spans": [
                    {
                        "start": 124,
                        "end": 141,
                        "text": "(Li et al., 2016)",
                        "ref_id": null
                    },
                    {
                        "start": 1110,
                        "end": 1129,
                        "text": "Fevry et al. (2020)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entity Representations and Entity Libraries",
                "sec_num": "2.2"
            },
            {
                "text": "Yamada et al. (2020) explore another entity masking task in the context of transformer pretraining. They train a large transformer on both masked words and masked entities in Wikipedia text. Here, however, each entity-in-context exists as its own token, rather than a representation that is aggregated over a sequence of mentions. Yamada et al. test on entity typing, relation classification, and named entity recognition.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entity Representations and Entity Libraries",
                "sec_num": "2.2"
            },
            {
                "text": "Finally, Bruera (2019) introduces the data that we will use to build our model (described in detail below), and compares the ability to construct computational embeddings for proper names with that of common nouns. Researchers trained a distributional semantics model to create and store two different representations for literary characters in novels, each from a separate section of text from the novel. The model is then asked to match the characters' representations from one portion of text to the representations computed from the other portion of text, which the authors term the Doppelg\u00e4nger Task. Importantly, their results showed that the ability to match these representations is much reduced in the case of proper names when compared to common nouns. This insight serves as a major motivation for the current work, where we follow the hypothesis that entities can be represented in a distributional fashion after all, though not with the same training as with common nouns. 2 We assume that entity representations must be persistent, continuously available, and dynamic.",
                "cite_spans": [
                    {
                        "start": 9,
                        "end": 22,
                        "text": "Bruera (2019)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 986,
                        "end": 987,
                        "text": "2",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entity Representations and Entity Libraries",
                "sec_num": "2.2"
            },
            {
                "text": "3 Data",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entity Representations and Entity Libraries",
                "sec_num": "2.2"
            },
            {
                "text": "In the current paper, we present a model that is able to construct entity representations for characters in classic literary novels. Novels are a compelling environment for this exploration as they feature a relatively small number of entities that appear frequently over a long document. To this end, we turn to the Novel Aficionados dataset introduced by Bruera (2019). The dataset comprises 62 pieces of classic literature, represented as both their original texts (deemed the OriginalNovels dataset; these texts are distributed by Project Gutenberg, which maintains a repository of free eBooks of works no longer protected by copyright), and their English Wikipedia summaries (the WikiNovels dataset). In order to have sufficient description of as many characters as possible, we only utilize the corpus of original novels in training our representations, as this corpus yields significantly more mentions per character. We utilize the Wikipedia summaries as a test set to determine how well our entity representations work outside the domain of the novels themselves.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entity Representations and Entity Libraries",
                "sec_num": "2.2"
            },
            {
                "text": "The novels are distributed within the dataset in both their original form and having been preprocessed with BookNLP (Bamman et al., 2014). BookNLP is a natural language processing pipeline that extends from Stanford CoreNLP (Manning et al., 2014) and is specifically aimed at scaling to books and other long documents. BookNLP includes part-of-speech tagging, dependency parsing, NER, and supersense tagging. Most critical to our application, BookNLP provides quotation speaker identification, pronominal coreference resolution, and character name clustering. This means that, in addition to standard anaphoric coreference resolution, BookNLP can identify different proper names as character aliases (e.g., Jane Fairfax, a character from Jane Austen's Emma, is referenced throughout the text not only by her full name, but also as Jane, Miss Jane Fairfax, and Miss Fairfax; BookNLP is able to recognize this and map all of these aliases to a single, unique character ID). Concerning the quality of the coreference resolution, Bamman et al. report an average accuracy of 82.7% in a 10-fold cross-validation experiment on predicting the nearest antecedent for a pronominal anaphor. While the accuracy of character clustering was not evaluated, manual inspection of the data revealed it to be very reliable.",
                "cite_spans": [
                    {
                        "start": 116,
                        "end": 137,
                        "text": "(Bamman et al., 2014)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 224,
                        "end": 246,
                        "text": "(Manning et al., 2014)",
                        "ref_id": "BIBREF12"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Entity Representations and Entity Libraries",
                "sec_num": "2.2"
            },
            {
                "text": "Our hypothesis is that it is possible to represent characters in a novel through an embedding in such a way that it is possible for a model to recognize who is who, or, as we call the task here, Is this Me?. Bruera (2019) found that with an approach that treated characters like common nouns, the related Doppelg\u00e4nger Task was not feasible. 4 We hypothesize that if embeddings are learned to best facilitate Is this Me? prediction, the task will be feasible. We further hypothesize that the resulting embeddings can be found to contain information about the characters. In a way, our approach is similar to recent contextualized language models like BERT (Devlin et al., 2019 ) in that we, too, train on a masked prediction task, and we, too, hope to find the resulting embeddings to be useful beyond the prediction task itself.",
                "cite_spans": [
                    {
                        "start": 208,
                        "end": 221,
                        "text": "Bruera (2019)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 341,
                        "end": 342,
                        "text": "4",
                        "ref_id": null
                    },
                    {
                        "start": 655,
                        "end": 675,
                        "text": "(Devlin et al., 2019",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Modeling",
                "sec_num": "4"
            },
            {
                "text": "4.1 A model for the \"Is this Me?\" task",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Modeling",
                "sec_num": "4"
            },
            {
                "text": "Our model keeps track of characters as they appear in a novel, and trains an embedding for each character through the Is this Me? task, a masked prediction task: Given a sentence of the novel with a masked character mention, and given the current embedding for character c, a classifier decides whether this is a mention of c or not. This is a binary task. The embedding for each character is updated incrementally as the novel is read, and as such, the entity embeddings are learned directly by the model from the data. 4 Although related, the Is this Me? and Doppelg\u00e4nger tasks are truly different in nature. As such, we cannot compare results on the Is this Me? task to results on the Doppelg\u00e4nger Task directly. The classifier weights are updated alongside the character embeddings.",
                "cite_spans": [
                    {
                        "start": 521,
                        "end": 522,
                        "text": "4",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Modeling",
                "sec_num": "4"
            },
            {
                "text": "Because the classifier weights are learned as the model reads the novels, we read all novels in parallel. The classifier is trained on a binary masked prediction task, where negative examples are drawn from the same novel. (That is, a negative example for Emma in the novel Emma might be Harriet, but it would never be Heathcliff.) A sketch of the model is shown in Figure 1 .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 366,
                        "end": 374,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Modeling",
                "sec_num": "4"
            },
            {
                "text": "Entity Library. The entity library, shown in blue in Figure 1 , is a collection of embeddings of literary characters, each represented by a 300-dimensional embedding learned incrementally throughout the novel. Entity embeddings are randomly initialized and passed through a projection layer (green in the figure) before being received by the classifier.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 53,
                        "end": 61,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Modeling",
                "sec_num": "4"
            },
            {
                "text": "Contextual Sentence and Target Mention Representations. We utilize the base, uncased distribution of BERT to compute contextualized sentence representations of each target sentence, shown in orange in Figure 1 . Contextualized sentence representations are truncated to a maximum of 150 subword tokens. 5 We do not fine-tune BERT on our data. All character mentions in a sentence are masked. The input to the classifier is a target representation of one of the masked entity mentions, using the mix representations introduced by Tenney et al. (2019). The target mention representation is computed directly from the contextualized sentence representations obtained from BERT and is a mix of the layer activations weighted by learned scalars.",
                "cite_spans": [
                    {
                        "start": 302,
                        "end": 303,
                        "text": "5",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 201,
                        "end": 209,
                        "text": "Figure 1",
                        "ref_id": "FIGREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Modeling",
                "sec_num": "4"
            },
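The scalar mix described above can be sketched in a few lines. This is our own minimal illustration, not the authors' code (the function name and toy dimensions are ours): following Tenney et al. (2019), the target-mention vector is a softmax-weighted sum of the per-layer activations at the masked position, times a global scale.

```python
import math

# Minimal sketch of a scalar mix over BERT layer activations (illustrative
# only; in the paper the scalars and gamma are learned alongside the model).
def scalar_mix(layer_vectors, scalars, gamma=1.0):
    """layer_vectors: one activation vector per layer for the [MASK] token;
    scalars: one (learned) scalar per layer; gamma: global scale."""
    exps = [math.exp(s) for s in scalars]
    weights = [e / sum(exps) for e in exps]        # softmax over layers
    dim = len(layer_vectors[0])
    return [gamma * sum(w * vec[d] for w, vec in zip(weights, layer_vectors))
            for d in range(dim)]
```

With equal scalars the mix reduces to a plain average of the layers; training the scalars lets the model emphasize whichever layers are most useful for the prediction task.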
            {
                "text": "Is this Me? Binary Classifier. The classifier for the binary Is this Me? task takes as input an entity embedding, transformed through the projection layer, along with a target mention embedding from BERT as described above. The classifier consists of a single ReLU-activated layer. We keep the classifier this simple intentionally: for the model to succeed, we want the entity representations to do the heavy lifting.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Modeling",
                "sec_num": "4"
            },
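To make the data flow in Figure 1 concrete, here is a toy forward pass in plain Python. This is a hand-rolled sketch under our own assumptions (tiny made-up dimensions instead of the paper's 300-d entity embeddings and BERT-sized mention vectors; names like `is_this_me` are ours), not the authors' implementation.

```python
import math, random

# Toy forward pass for the Is this Me? pipeline (illustrative sizes only;
# the paper uses 300-d entity embeddings and BERT-base mention vectors).
DIM = 8

def vec(n, rng):
    return [rng.uniform(-0.1, 0.1) for _ in range(n)]

def mat(rows, cols, rng):
    return [vec(cols, rng) for _ in range(rows)]

def matvec(m, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in m]

rng = random.Random(0)
entity_library = {"emma": vec(DIM, rng)}      # randomly initialised embedding
projection = mat(DIM, DIM, rng)               # projection layer (green)
hidden = mat(DIM, 2 * DIM, rng)               # single hidden layer with ReLU
readout = vec(DIM, rng)                       # scalar read-out -> sigmoid

def is_this_me(char_id, mention_vec):
    """Score whether the masked mention (mention_vec) belongs to char_id."""
    projected = matvec(projection, entity_library[char_id])
    h = [max(0.0, v) for v in matvec(hidden, projected + mention_vec)]
    z = sum(w * v for w, v in zip(readout, h))
    return 1.0 / (1.0 + math.exp(-z))          # probability of "yes, it's me"
```

In training, both `entity_library["emma"]` and all of the weight matrices would receive gradient updates from the binary loss.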
            {
                "text": "Training Data. We restrict our modeling to characters that appear at least 10 times to ensure that there is enough information to train a representation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model details",
                "sec_num": "4.2"
            },
            {
                "text": "As our intent is to induce entity representations for each character, we must mask each character mention. For each mention of any character predicted by BookNLP in each sentence in a novel, we replace the mention with a single [MASK] token in order to obscure the character's identity from the model. Multiword mentions are reduced to a single [MASK] token in order to prevent the model from being able to detect signal from mention length. Masking is applied to any mention, even for characters that appear fewer than 10 times.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model details",
                "sec_num": "4.2"
            },
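The masking step can be sketched as follows. This is a minimal reconstruction, not the authors' preprocessing code, assuming mention boundaries arrive from BookNLP as token spans:

```python
# Sketch of the mention-masking step. Each mention is a half-open
# (start, end) token span; a multiword mention collapses to a single
# [MASK] token so that mention length leaks no signal to the model.
def mask_mentions(tokens, mention_spans):
    """Replace every (assumed non-overlapping) mention span with one [MASK]."""
    masked = []
    i = 0
    spans = sorted(mention_spans)
    while i < len(tokens):
        span = next((s for s in spans if s[0] == i), None)
        if span is not None:
            masked.append("[MASK]")
            i = span[1]          # skip the remainder of a multiword mention
        else:
            masked.append(tokens[i])
            i += 1
    return masked
```

For example, the three-token mention in "Miss Jane Fairfax smiled" (span (0, 3)) collapses to a single [MASK] followed by "smiled".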
            {
                "text": "For each sentence in a novel that contains at least one character mention, we produce at least two examples for the model: one positive example from the gold data, and one hallucinated example, created by randomly selecting a confound character from the same novel. If a character is mentioned more than once in the same sentence, one mention is randomly selected to be the target mention for that character in that sentence. If a sentence mentions more than one character, a single positive example is generated for each character. Consider this sentence from Jane Austen's Emma:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model details",
                "sec_num": "4.2"
            },
            {
                "text": "Whenever [MASK] James goes over to see [MASK] James' daughter, you know, [MASK] Miss Taylor will be hearing of us.",
                "cite_spans": [
                    {
                        "start": 39,
                        "end": 45,
                        "text": "[MASK]",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model details",
                "sec_num": "4.2"
            },
            {
                "text": "We first have to decide whether to generate examples for James or for Miss Taylor. We pick one of the two at random, let us assume it is James. We next randomly select one of the two mentions of James to be the target mention. Let us say we pick the first. The input to the model for the positive example is then the Tenney et al. (2019) mix representation of the target mention concatenated with the current entity representation of James. We then construct a negative example by randomly selecting a character other than James to serve as a confound, following standard practice. If, for example, we were to sample Isabella (Emma's sister), the input to the model for the negative example from this mention would be the exact same mix representation of the target mention concatenated with the current entity embedding of the confound character, Isabella. With positive and negative examples constructed for James's mention, we then turn to the remaining character, Miss Taylor, and construct a positive and negative example for her mention.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model details",
                "sec_num": "4.2"
            },
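The walkthrough above can be summarized in a short sketch. This is our own illustration of the sampling scheme (function and variable names are ours), not the authors' code: for each mentioned character we emit one positive example and one negative example whose confound is drawn from the same novel.

```python
import random

# Per-sentence example generation (our sketch; names are illustrative).
def make_examples(mentions_by_char, novel_characters, rng=random):
    """mentions_by_char maps char_id -> mention positions in the sentence.
    Emits (char_id, target_mention, label) triples: one positive and one
    negative per mentioned character, confounds drawn from the same novel."""
    examples = []
    chars = list(mentions_by_char)
    rng.shuffle(chars)                        # character order is random
    for char in chars:
        target = rng.choice(mentions_by_char[char])   # pick one target mention
        examples.append((char, target, 1))            # positive example
        confounds = [c for c in novel_characters if c != char]
        examples.append((rng.choice(confounds), target, 0))  # negative example
    return examples
```

Note that the negative example pairs the very same target-mention representation with a different character's embedding, exactly as in the Isabella example above.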
            {
                "text": "Note that by restricting the possible set of confounds for a given character to characters in the same novel, we have created a more difficult negative example than if we were to sample across all novels. For example, telling the difference between Elizabeth Bennet and Jane Bennet (both from Austen's Pride and Prejudice) is significantly more difficult than telling the difference between Elizabeth Bennet and the Cowardly Lion (from Baum's The Wonderful Wizard of Oz).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model details",
                "sec_num": "4.2"
            },
            {
                "text": "Training. All learned weights (the entity embeddings themselves, those in the projection layer, the scalars guiding the target mention representation, and those in the classifier) are updated with respect to cross entropy loss, optimized with Adam (Kingma and Ba, 2015) at a learning rate of 2e-05.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Model details",
                "sec_num": "4.2"
            },
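For reference, a single Adam update (Kingma and Ba, 2015) at the learning rate used here looks as follows. This is a generic textbook sketch of the optimizer for one scalar parameter with its default hyperparameters, not the authors' training loop.

```python
import math

# One Adam step (Kingma and Ba, 2015) for a single scalar parameter, with
# the learning rate used in the paper and the optimizer's default settings.
def adam_step(theta, grad, m, v, t, lr=2e-05, b1=0.9, b2=0.999, eps=1e-08):
    m = b1 * m + (1 - b1) * grad            # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad     # second-moment estimate
    m_hat = m / (1 - b1 ** t)               # bias correction, step t >= 1
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v
```

The same update is applied to every learned weight: the entity embeddings, the projection layer, the mix scalars, and the classifier.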
            {
                "text": "We first address our questions (a) and (b) from above: Is it possible to predict a masked mention of a literary character from an entity embedding, either within the same novel or in a summary of the same novel? And does performance degrade if we \"skip ahead\", using a character embedding trained on the beginning of a novel to predict a mention near the end?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Is this Me?",
                "sec_num": "5.1"
            },
            {
                "text": "We begin by allowing the model to train entity representations (and all other learned weights) continuously throughout each novel. This means that we treat each example as a test example, and only allow the model to update based on its performance on a given example after its prediction has been made, as in a standard learning curve. As such, although the model is updated after every example, our performance statistics are computed over its prediction made before the update operation (meaning there is no performance computed over already-seen examples). As Table 2 shows, the model does well at this task, with overall accuracy across all characters and all novels of 74.37%. Accuracy was consistent across positive and negative examples. Most learning happened quickly within the first 50,000 examples, though accuracy did continue to increase through the entire run ( Figure 2 ). As should be expected, overall accuracy at the book level in this task is subject to frequency effects. Book-level accuracy exhibits strong positive correlation with the total number of examples per novel (r = 0.584; p \u226a 0.01; Figure 3, left) . Interestingly, however, book-level accuracy also increases with the number of characters modeled per book (r = 0.500; p \u226a 0.01; Figure 3, right) .",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 563,
                        "end": 570,
                        "text": "Table 2",
                        "ref_id": "TABREF3"
                    },
                    {
                        "start": 876,
                        "end": 884,
                        "text": "Figure 2",
                        "ref_id": "FIGREF1"
                    },
                    {
                        "start": 1115,
                        "end": 1130,
                        "text": "Figure 3, left)",
                        "ref_id": "FIGREF2"
                    },
                    {
                        "start": 1261,
                        "end": 1277,
                        "text": "Figure 3, right)",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "Continuous Training",
                "sec_num": "5.1.1"
            },
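The predict-then-update bookkeeping described above amounts to prequential (test-then-train) evaluation; a minimal sketch under our own naming:

```python
# Prequential ("test, then train") accuracy: every example is scored before
# the model is allowed to update on it, so no accuracy is ever computed over
# an already-seen example.
def prequential_accuracy(examples, predict, update):
    correct = 0
    for x, label in examples:
        correct += int(predict(x) == label)   # score first...
        update(x, label)                      # ...then learn from the example
    return correct / len(examples)
```

Any online learner fits this interface; the reported 74.37% overall accuracy is of this test-then-train kind.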
            {
                "text": "To see whether the model is affected by language differences between older and more recent books, we used linear regression to predict book-level accuracy from novel publication date, finding very low correlation (R\u00b2 = 0.008; p = 0.663). At the character level, frequency effects were not nearly as strong, except in cases where characters were mentioned very frequently (defined here as characters with over 300 mentions). Across all characters, testing showed moderate positive correlation with mention frequency (r = 0.174; p \u226a 0.01; Figure 4 , left). Within frequently appearing characters, correlation with mention frequency was much higher (r = 0.633; p \u226a 0.01; Figure 4 , right). ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 537,
                        "end": 545,
                        "text": "Figure 4",
                        "ref_id": "FIGREF3"
                    },
                    {
                        "start": 668,
                        "end": 676,
                        "text": "Figure 4",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "Continuous Training",
                "sec_num": "5.1.1"
            },
            {
                "text": "We also explored the extent to which the entity representations, after having been trained on the full novel, could identify the same entities in short summaries of the same novel. To that end, we used the WikiNovels summaries distributed with the Novel Aficionados dataset. The summaries show a strong domain shift compared to the novels. While they frequently do contain proper, if succinct, descriptions of the novel's plot and the involvement of major characters, they also exhibit significantly different patterns of language. Wikipedia entries do not just summarize the novels, they also frequently include metadiscursive language, as in this sentence from the WikiNovels summary of Austen's Emma:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Applicability to Novel Summaries",
                "sec_num": "5.1.2"
            },
            {
                "text": "This point of view appears both as something perceived by [emma woodhouse] an external perspective on events and characters that the reader encounters as and when [emma woodhouse] recognises it and as an independent discourse appearing in the text alongside the discourse of the narrator and characters.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Applicability to Novel Summaries",
                "sec_num": "5.1.2"
            },
            {
                "text": "Because of this shift in domain, we see vastly reduced performance in character prediction and a heavy bias towards claiming the target mention is not a given character when using the model trained on the sentences from the novel. This is shown in Table 3 . We evaluated the model in two settings. In the Pos. Only setting, all data points were positives, such that the model would have perfect accuracy by always saying yes. In the Pos. & Neg. setting, we use the same negative example generation technique as used in the model's training. While the model performs slightly better than chance when negative examples are included, it remains clear that future work should explore ways to generalize the entity representations such that they may be more informative across domain boundaries. ",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 248,
                        "end": 255,
                        "text": "Table 3",
                        "ref_id": "TABREF5"
                    }
                ],
                "eq_spans": [],
                "section": "Applicability to Novel Summaries",
                "sec_num": "5.1.2"
            },
            {
                "text": "In \u00a72 we noted that identifying masked character mentions is not trivial due to the nature of narratives themselves. Literary plots are often constructed to force profound change in the behaviors, beliefs, and characteristics of central characters. This may be among the reasons that Bruera (2019) reported such difficulty with the Doppelg\u00e4nger task. To see if distance in the novel affects our representations, we experimented with \"skipping ahead\" in the novel in order to determine the impact on performance when entities are not continuously updated.",
                "cite_spans": [
                    {
                        "start": 284,
                        "end": 297,
                        "text": "Bruera (2019)",
                        "ref_id": "BIBREF3"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Non-Continuous Training",
                "sec_num": "5.1.3"
            },
            {
                "text": "Inspired by traditional character arcs, we split each novel into three sections of equal length (determined by number of sentences). The underlying assumption is that, due to the structure of the narrative, each character (especially main characters) will undergo some form of growth or change in between each novel section, suggesting that the learned entity representations should never be static in order to encode the results of that growth. We allowed a new Is this Me? classifier to learn representations for all literary entities using only the first third of the novels as training data, then froze the entity embeddings, and evaluated classifier performance against the middle and final thirds independently. We hypothesized that the model would exhibit a gradual decrease in performance as it moved further away from the point in time at which the entity representations were fixed, with the performance on the middle third better than performance toward the ends of the novels. Instead, we found a fairly rapid decline in performance (Table 4) . Performance stays above chance, however, suggesting there is a kernel of each representation that is informative regardless. While this experiment does not explicitly demonstrate character development/change, the sharp decrease in performance when entity representations are fixed implicitly supports the claim that such change is present. Capturing that development directly, however, is another very difficult task, and one well worth the attention of future work.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 1045,
                        "end": 1054,
                        "text": "(Table 4)",
                        "ref_id": "TABREF7"
                    }
                ],
                "eq_spans": [],
                "section": "Non-Continuous Training",
                "sec_num": "5.1.3"
            },
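The three-way split used in this experiment can be sketched as follows (our reconstruction; the boundary handling for novels whose sentence count is not divisible by three is an assumption):

```python
# Cut a novel into three sections of (nearly) equal length by sentence count.
# The first third serves as training data; embeddings are then frozen and the
# classifier is evaluated on the middle and final thirds separately.
def split_into_thirds(sentences):
    n = len(sentences)
    a, b = n // 3, 2 * n // 3
    return sentences[:a], sentences[a:b], sentences[b:]
```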
            {
                "text": "We have found that, at least when entity embeddings are continuously trained, they can be used to predict a masked mention in a novel with reasonable accuracy. But are the resulting embeddings useful beyond the masked entity prediction task? To find this out, we turn to our questions (c) and (d) from above, and see if we can predict character gender from entity representation, and whether the identity of a character can be predicted from a description.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Probing Entity Embeddings",
                "sec_num": "5.2"
            },
            {
                "text": "We used a simple logistic regression model to probe the extent to which gender is encoded in the entity representations obtained from the continuous training in \u00a75.1.1. As we have no gold annotation of literary character gender, we utilize the BookNLP preprocessing to look for gendered pronouns (she/he) for each character as a form of distant supervision. Manual inspection shows this heuristic to be very reliable after omitting characters for which no pronominal coreference link is available and characters who exhibit coreference chains featuring both gendered pronouns. This left a total of 2,195 characters (1,533 male, 662 female) to be considered for this experiment. We learn a single weight for each embedding dimension for a total of 300 weights. In each case, we trained the classifiers on 80% of the characters across all novels (1,756 characters), leaving a test set of 439 characters. Each model was run four times, and we present the mean performance statistics in Table 5 . Results were favorable across all runs, suggesting the learned character representations do encapsulate some knowledge of literary character gender.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 983,
                        "end": 990,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Predicting Gender from Literary Character Representation",
                "sec_num": "5.2.1"
            },
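The probe can be sketched as a plain logistic regression with one weight per embedding dimension. The snippet below is our own minimal stand-in (synthetic toy vectors instead of the learned 300-d embeddings, hand-rolled per-example gradient descent instead of a library solver), not the authors' setup:

```python
import math, random

# Logistic-regression probe: one weight per embedding dimension plus a bias,
# trained with per-example gradient descent on the log loss.
def train_probe(X, y, dim, epochs=100, lr=0.5):
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            z = max(-60.0, min(60.0, z))          # clamp to avoid overflow
            g = 1.0 / (1.0 + math.exp(-z)) - t    # d(log loss)/d(logit)
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict_gender(w, b, x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0.0)
```

Because the probe is linear, any above-chance accuracy implies the gender signal is linearly readable from the entity embedding itself rather than inferred by the probe.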
            {
                "text": "\u00b5 Acc: 60.15%; \u00b5 MSE: 0.3984; \u00b5 F1: 0.7208. Table 5 : Model performance on predicting character gender from entity embeddings: Accuracy, mean squared error, and F1.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 44,
                        "end": 51,
                        "text": "Table 5",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Predicting Gender from Literary Character Representation",
                "sec_num": "5.2.1"
            },
            {
                "text": "While the WikiNovels corpus is noisy and cluttered with metadiscursive literary commentary, as noted in \u00a75.1.2, certain Wikipedia novel summaries do contain detailed descriptions of major characters. To better evaluate the ability of our learned entity representations to generalize outside of the domain of the novels on which they were trained, we manually extracted a subset of sentences that more readily pertained to our research question.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Character Descriptions",
                "sec_num": "5.2.2"
            },
            {
                "text": "We isolated five novels which featured clean character descriptions within their summaries: Jane Austen's Emma, Charles Dickens's A Tale of Two Cities and Great Expectations, Fyodor Dostoevsky's The Brothers Karamazov, and Charlotte Bront\u00eb's Jane Eyre. From the character descriptions within these summaries we generated a total of 605 Is this Me?-style examples (positive and negative).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Character Descriptions",
                "sec_num": "5.2.2"
            },
            {
                "text": "The pre-trained classifier exhibited performance above chance (61.63% accuracy), and a surprising ability to handle challenging out-of-domain sentences. While the model successfully predicted a high-level description of Emma Woodhouse (Table 6 ; Row 1), it struggled with a similar description of Estella Havisham (Row 2). The model was also able to identify a character based on the description of a pivotal plot point (Row 3), but unsurprisingly struggled with more critical descriptions (Row 4).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 235,
                        "end": 243,
                        "text": "(Table 6",
                        "ref_id": "TABREF9"
                    }
                ],
                "eq_spans": [],
                "section": "Character Descriptions",
                "sec_num": "5.2.2"
            },
            {
                "text": "In the ideal case, an entity embedding would constitute a compact representation of a person, their character traits and life story, and would allow for inferences about that person, including story arcs in which that person is likely to occur. What is the best way to learn embeddings for entities, and what can be learned from them? We have considered this question for the case of literary characters. We have trained entity embeddings through a masked prediction task, reading a collection of novels from beginning to end. We found that when trained continuously, the entity embeddings did well at the Is this Me? task: Given a target entity embedding, and given a sentence of the novel with a masked entity mention, is this a mention of the target entity?",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "The Is this Me? task becomes much harder when we \"skip ahead\", training only on the first third of a novel and then evaluating on the middle and end. The task also becomes much harder when applied to Wikipedia summaries of novels, which show a marked domain difference from the novels themselves. Probing the entity embeddings that result from the masked prediction task, we find that they encode a good amount of information about the entities. The gender of a literary character can in many cases be recovered from the embedding, and it is often even possible to identify a character from a Wikipedia description of their characteristic traits.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "Looking ahead, the training regime and trained embeddings allow for many further analyses. We would like to probe further into the \"skipping ahead\" setting to better understand why it is so difficult. Intuitively, characters that undergo more development across the length of a novel should be more difficult to predict. It is not clear to what extent this is the case with the current model; this needs further study. In addition, we would like to model the change and development of characters more explicitly, for example by representing them as a trajectory over time rather than a single embedding. It would also be beneficial to further explore the ways in which character traits are implicitly present within entity representations learned from the Is this Me? task. While we attempted to probe this superficially via the evaluation on out-of-domain Wikipedia data, this data does not offer the annotation that would be necessary to perform a more in-depth analysis.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "We would also like to extend the model by including additional relevant input. At the moment, we essentially ask the model to bootstrap entity representations from scratch, using only the contextualized sentence representations produced by BERT and the current entity representations as input. Other information such as semantic relations (retrievable via dependency parse) may be useful. We may also consider the kind of events and modifiers that a given entity participates in to be able to exploit patterns across character archetypes (similar to Bamman et al. (2014) ). We are also looking to extend the model to directly model relations between characters as relations between entity embeddings, to see whether this would help performance and to see to what extent the interpersonal relations of characters would be encoded in their embeddings.",
                "cite_spans": [
                    {
                        "start": 550,
                        "end": 570,
                        "text": "Bamman et al. (2014)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "Overall, we find the results presented in the current paper to be promising as a first step towards natural language understanding systems that use neural models of entities over time. As we have outlined here, however, there is still much work to be done.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion",
                "sec_num": "6"
            },
            {
                "text": "The [MASK] in the title is actually La Carconte, from The Count of Monte Cristo by Alexandre Dumas.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "2 While our work is inspired by Bruera (2019) and conducted on the same data, we introduce a different task that is not directly comparable to the Doppelg\u00e4nger Task. 3 Unfortunately, the texts are not distributed with more general coreference resolution (beyond character aliases and pronominal resolution). This means we are unable to include nominal expressions as character mentions to be considered.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "This limit was determined by inspecting the length of each sentence in the corpus in subword tokens and permits nearly all sentences to remain untruncated.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "This set of examples may be found at http://www.katrinerk.com/home/software-and-data.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [
            {
                "text": "This research was supported by the DARPA AIDA program under AFRL grant FA8750-18-2-0017. We acknowledge the Texas Advanced Computing Center for providing grid resources that contributed to these results; results presented in this paper were also obtained using the Chameleon testbed supported by the National Science Foundation. We would like to thank the anonymous reviewers for their valuable feedback, as well as Jessy Li and Pengxiang Cheng.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": "7"
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Learning latent personas of film characters",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Bamman",
                        "suffix": ""
                    },
                    {
                        "first": "Brendan",
                        "middle": [],
                        "last": "O'Connor",
                        "suffix": ""
                    },
                    {
                        "first": "Noah A",
                        "middle": [],
                        "last": "Smith",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "352--361",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Bamman, Brendan OConnor, and Noah A Smith. 2013. Learning latent personas of film characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 352-361.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "A Bayesian mixed effects model of literary character",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Bamman",
                        "suffix": ""
                    },
                    {
                        "first": "Ted",
                        "middle": [],
                        "last": "Underwood",
                        "suffix": ""
                    },
                    {
                        "first": "Noah",
                        "middle": [],
                        "last": "Smith",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Bamman, Ted Underwood, and Noah Smith. 2014. A Bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meet- ing of the Association for Computational Linguistics (Volume 1: Long Papers).",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Simulating action dynamics with neural process networks",
                "authors": [
                    {
                        "first": "Antoine",
                        "middle": [],
                        "last": "Bosselut",
                        "suffix": ""
                    },
                    {
                        "first": "Omer",
                        "middle": [],
                        "last": "Levy",
                        "suffix": ""
                    },
                    {
                        "first": "Ari",
                        "middle": [],
                        "last": "Holtzman",
                        "suffix": ""
                    },
                    {
                        "first": "Corin",
                        "middle": [],
                        "last": "Ennis",
                        "suffix": ""
                    },
                    {
                        "first": "Dieter",
                        "middle": [],
                        "last": "Fox",
                        "suffix": ""
                    },
                    {
                        "first": "Yejin",
                        "middle": [],
                        "last": "Choi",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 6th International Conference for Learning Representations",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Antoine Bosselut, Omer Levy, Ari Holtzman, Corin En- nis, Dieter Fox, and Yejin Choi. 2018. Simulating action dynamics with neural process networks. In Proceedings of the 6th International Conference for Learning Representations (ICLR).",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Modelling the semantic memory of proper names with distributional semantics. Master's thesis",
                "authors": [
                    {
                        "first": "Andrea",
                        "middle": [],
                        "last": "Bruera",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andrea Bruera. 2019. Modelling the semantic memory of proper names with distributional semantics. Master's thesis, Università degli Studi di Trento.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
                "authors": [
                    {
                        "first": "Jacob",
                        "middle": [],
                        "last": "Devlin",
                        "suffix": ""
                    },
                    {
                        "first": "Ming-Wei",
                        "middle": [],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "Kenton",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Kristina",
                        "middle": [],
                        "last": "Toutanova",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "1",
                "issue": "",
                "pages": "4171--4186",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/N19-1423"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Entities as experts: Sparse memory access with entity supervision",
                "authors": [
                    {
                        "first": "Thibault",
                        "middle": [],
                        "last": "Fevry",
                        "suffix": ""
                    },
                    {
                        "first": "Livio",
                        "middle": [
                            "Baldini"
                        ],
                        "last": "Soares",
                        "suffix": ""
                    },
                    {
                        "first": "Nicholas",
                        "middle": [],
                        "last": "FitzGerald",
                        "suffix": ""
                    },
                    {
                        "first": "Eunsol",
                        "middle": [],
                        "last": "Choi",
                        "suffix": ""
                    },
                    {
                        "first": "Tom",
                        "middle": [],
                        "last": "Kwiatkowski",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
                "volume": "",
                "issue": "",
                "pages": "4937--4951",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/2020.emnlp-main.400"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Thibault Fevry, Livio Baldini Soares, Nicholas FitzGer- ald, Eunsol Choi, and Tom Kwiatkowski. 2020. En- tities as experts: Sparse memory access with entity supervision. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Process- ing (EMNLP), pages 4937-4951. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Personality profiling of fictional characters using sense-level links between lexical resources",
                "authors": [
                    {
                        "first": "Lucie",
                        "middle": [],
                        "last": "Flekova",
                        "suffix": ""
                    },
                    {
                        "first": "Iryna",
                        "middle": [],
                        "last": "Gurevych",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "1805--1816",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lucie Flekova and Iryna Gurevych. 2015. Personal- ity profiling of fictional characters using sense-level links between lexical resources. In Proceedings of the 2015 Conference on Empirical Methods in Natu- ral Language Processing, pages 1805-1816.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Dynamic entity representations in neural language models",
                "authors": [
                    {
                        "first": "Yangfeng",
                        "middle": [],
                        "last": "Ji",
                        "suffix": ""
                    },
                    {
                        "first": "Chenhao",
                        "middle": [],
                        "last": "Tan",
                        "suffix": ""
                    },
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "Martschat",
                        "suffix": ""
                    },
                    {
                        "first": "Yejin",
                        "middle": [],
                        "last": "Choi",
                        "suffix": ""
                    },
                    {
                        "first": "Noah",
                        "middle": [
                            "A"
                        ],
                        "last": "Smith",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
                "volume": "",
                "issue": "",
                "pages": "1830--1839",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/D17-1195"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Yangfeng Ji, Chenhao Tan, Sebastian Martschat, Yejin Choi, and Noah A. Smith. 2017. Dynamic entity representations in neural language models. In Pro- ceedings of the 2017 Conference on Empirical Meth- ods in Natural Language Processing, pages 1830- 1839, Copenhagen, Denmark. Association for Com- putational Linguistics.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "The Big Five trait taxonomy: History, measurement, and theoretical perspectives",
                "authors": [
                    {
                        "first": "Oliver",
                        "middle": [
                            "P"
                        ],
                        "last": "John",
                        "suffix": ""
                    },
                    {
                        "first": "Sanjay",
                        "middle": [],
                        "last": "Srivastava",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Handbook of personality: Theory and research",
                "volume": "2",
                "issue": "",
                "pages": "102--138",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Oliver P John, Sanjay Srivastava, et al. 1999. The Big Five trait taxonomy: History, measurement, and the- oretical perspectives. Handbook of personality: The- ory and research, 2(1999):102-138.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Adam: A method for stochastic optimization",
                "authors": [
                    {
                        "first": "Diederik",
                        "middle": [
                            "P"
                        ],
                        "last": "Kingma",
                        "suffix": ""
                    },
                    {
                        "first": "Jimmy",
                        "middle": [],
                        "last": "Ba",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "A persona-based neural conversation model",
                "authors": [
                    {
                        "first": "Jiwei",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    },
                    {
                        "first": "Michel",
                        "middle": [],
                        "last": "Galley",
                        "suffix": ""
                    },
                    {
                        "first": "Chris",
                        "middle": [],
                        "last": "Brockett",
                        "suffix": ""
                    },
                    {
                        "first": "Georgios",
                        "middle": [
                            "P"
                        ],
                        "last": "Spithourakis",
                        "suffix": ""
                    },
                    {
                        "first": "Jianfeng",
                        "middle": [],
                        "last": "Gao",
                        "suffix": ""
                    },
                    {
                        "first": "Bill",
                        "middle": [],
                        "last": "Dolan",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios P Sp- ithourakis, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Deep Dungeons and Dragons: Learning character-action interactions from role-playing game transcripts",
                "authors": [
                    {
                        "first": "Annie",
                        "middle": [],
                        "last": "Louis",
                        "suffix": ""
                    },
                    {
                        "first": "Charles",
                        "middle": [],
                        "last": "Sutton",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "2",
                "issue": "",
                "pages": "708--713",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Annie Louis and Charles Sutton. 2018. Deep Dun- geons and Dragons: Learning character-action in- teractions from role-playing game transcripts. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 708-713.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "The Stanford CoreNLP natural language processing toolkit",
                "authors": [
                    {
                        "first": "Christopher",
                        "middle": [
                            "D"
                        ],
                        "last": "Manning",
                        "suffix": ""
                    },
                    {
                        "first": "Mihai",
                        "middle": [],
                        "last": "Surdeanu",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Bauer",
                        "suffix": ""
                    },
                    {
                        "first": "Jenny",
                        "middle": [],
                        "last": "Finkel",
                        "suffix": ""
                    },
                    {
                        "first": "Steven",
                        "middle": [
                            "J"
                        ],
                        "last": "Bethard",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "McClosky",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Association for Computational Linguistics (ACL) System Demonstrations",
                "volume": "",
                "issue": "",
                "pages": "55--60",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP natural lan- guage processing toolkit. In Association for Compu- tational Linguistics (ACL) System Demonstrations, pages 55-60.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Gifts Differing: Understanding Personality Type",
                "authors": [
                    {
                        "first": "Isabel",
                        "middle": [
                            "Briggs"
                        ],
                        "last": "Myers",
                        "suffix": ""
                    },
                    {
                        "first": "Peter",
                        "middle": [
                            "B"
                        ],
                        "last": "Myers",
                        "suffix": ""
                    }
                ],
                "year": 2010,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Isabel Briggs Myers and Peter B Myers. 2010. Gifts Differing: Understanding Personality Type. Nicholas Brealey.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Deep contextualized word representations",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Peters",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Neumann",
                        "suffix": ""
                    },
                    {
                        "first": "Mohit",
                        "middle": [],
                        "last": "Iyyer",
                        "suffix": ""
                    },
                    {
                        "first": "Matt",
                        "middle": [],
                        "last": "Gardner",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [],
                        "last": "Clark",
                        "suffix": ""
                    },
                    {
                        "first": "Kenton",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Luke",
                        "middle": [],
                        "last": "Zettlemoyer",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "1",
                "issue": "",
                "pages": "2227--2237",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/N18-1202"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word rep- resentations. In Proceedings of the 2018 Confer- ence of the North American Chapter of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Personality traits on Twitter -or- how to get 1,500 personality tests in a week",
                "authors": [
                    {
                        "first": "Barbara",
                        "middle": [],
                        "last": "Plank",
                        "suffix": ""
                    },
                    {
                        "first": "Dirk",
                        "middle": [],
                        "last": "Hovy",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis",
                "volume": "",
                "issue": "",
                "pages": "92--98",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Barbara Plank and Dirk Hovy. 2015. Personality traits on Twitter -or-how to get 1,500 personality tests in a week. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sen- timent and Social Media Analysis, pages 92-98.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Language models are unsupervised multitask learners",
                "authors": [
                    {
                        "first": "Alec",
                        "middle": [],
                        "last": "Radford",
                        "suffix": ""
                    },
                    {
                        "first": "Jeff",
                        "middle": [],
                        "last": "Wu",
                        "suffix": ""
                    },
                    {
                        "first": "Rewon",
                        "middle": [],
                        "last": "Child",
                        "suffix": ""
                    },
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Luan",
                        "suffix": ""
                    },
                    {
                        "first": "Dario",
                        "middle": [],
                        "last": "Amodei",
                        "suffix": ""
                    },
                    {
                        "first": "Ilya",
                        "middle": [],
                        "last": "Sutskever",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Modeling naive psychology of characters in simple commonsense stories",
                "authors": [
                    {
                        "first": "Hannah",
                        "middle": [],
                        "last": "Rashkin",
                        "suffix": ""
                    },
                    {
                        "first": "Antoine",
                        "middle": [],
                        "last": "Bosselut",
                        "suffix": ""
                    },
                    {
                        "first": "Maarten",
                        "middle": [],
                        "last": "Sap",
                        "suffix": ""
                    },
                    {
                        "first": "Kevin",
                        "middle": [],
                        "last": "Knight",
                        "suffix": ""
                    },
                    {
                        "first": "Yejin",
                        "middle": [],
                        "last": "Choi",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
                "volume": "1",
                "issue": "",
                "pages": "2289--2299",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/P18-1213"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Hannah Rashkin, Antoine Bosselut, Maarten Sap, Kevin Knight, and Yejin Choi. 2018. Modeling naive psychology of characters in simple commonsense stories. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2289-2299, Melbourne, Australia. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "What do you learn from context? probing for sentence structure in contextualized word representations",
                "authors": [
                    {
                        "first": "Ian",
                        "middle": [],
                        "last": "Tenney",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Xia",
                        "suffix": ""
                    },
                    {
                        "first": "Berlin",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Alex",
                        "middle": [],
                        "last": "Wang",
                        "suffix": ""
                    },
                    {
                        "first": "Adam",
                        "middle": [],
                        "last": "Poliak",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [
                            "Thomas"
                        ],
                        "last": "McCoy",
                        "suffix": ""
                    },
                    {
                        "first": "Najoung",
                        "middle": [],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Van Durme",
                        "suffix": ""
                    },
                    {
                        "first": "Sam",
                        "middle": [],
                        "last": "Bowman",
                        "suffix": ""
                    },
                    {
                        "first": "Dipanjan",
                        "middle": [],
                        "last": "Das",
                        "suffix": ""
                    },
                    {
                        "first": "Ellie",
                        "middle": [],
                        "last": "Pavlick",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "International Conference on Learning Representations",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang, Adam Poliak, R Thomas McCoy, Najoung Kim, Benjamin Van Durme, Sam Bowman, Dipanjan Das, and Ellie Pavlick. 2019. What do you learn from context? probing for sentence structure in contextualized word representations. In International Conference on Learning Representations.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "LUKE: Deep contextualized entity representations with entity-aware self-attention",
                "authors": [
                    {
                        "first": "Ikuya",
                        "middle": [],
                        "last": "Yamada",
                        "suffix": ""
                    },
                    {
                        "first": "Akari",
                        "middle": [],
                        "last": "Asai",
                        "suffix": ""
                    },
                    {
                        "first": "Hiroyuki",
                        "middle": [],
                        "last": "Shindo",
                        "suffix": ""
                    },
                    {
                        "first": "Hideaki",
                        "middle": [],
                        "last": "Takeda",
                        "suffix": ""
                    },
                    {
                        "first": "Yuji",
                        "middle": [],
                        "last": "Matsumoto",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
                "volume": "",
                "issue": "",
                "pages": "6442--6454",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/2020.emnlp-main.523"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Ikuya Yamada, Akari Asai, Hiroyuki Shindo, Hideaki Takeda, and Yuji Matsumoto. 2020. LUKE: Deep contextualized entity representations with entity-aware self-attention. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6442-6454, Online. Association for Computational Linguistics.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Sketch of the model. The sentence is from Jane Austen's \"Emma\". All characters are masked. In this case, the first character is Knightley; the second, the target, is Emma. Blue: entity library. Red: Is this Me? classifier."
            },
            "FIGREF1": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Is this Me? Continuous Training learning curve."
            },
            "FIGREF2": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Is this Me? Continuous Training -Book-Level Accuracy. Accuracy within book (y-axis) is plotted against the number of examples for that book (xaxis)."
            },
            "FIGREF3": {
                "num": null,
                "type_str": "figure",
                "uris": null,
                "text": "Is this Me? Continuous Training -Character-Level Accuracy. Accuracy within character (y-axis) is plotted against the number of examples for that character (x-axis)."
            },
            "TABREF1": {
                "content": "<table><tr><td>Alexei Fyodorovich</td><td/></tr><tr><td>Karamazov</td><td>True/False</td></tr><tr><td>Heathcliff</td><td/></tr><tr><td/><td>Is this Me?</td></tr><tr><td>Emma Woodhouse</td><td/></tr><tr><td>Harriet Smith</td><td>span</td></tr><tr><td/><td>repr.</td></tr><tr><td/><td>{</td></tr><tr><td>BERT</td><td/></tr></table>",
                "type_str": "table",
                "text": "[MASK], in fact, was one of the few people who could see faults in[MASK].",
                "num": null,
                "html": null
            },
            "TABREF3": {
                "content": "<table/>",
                "type_str": "table",
                "text": "Is this Me? accuracy for continuously trained entity representations.",
                "num": null,
                "html": null
            },
            "TABREF5": {
                "content": "<table/>",
                "type_str": "table",
                "text": "Is this Me? accuracy for continuously trained entity representations on WikiNovel summaries.",
                "num": null,
                "html": null
            },
            "TABREF7": {
                "content": "<table/>",
                "type_str": "table",
                "text": "Is this Me? accuracy on novels split into thirds.",
                "num": null,
                "html": null
            },
            "TABREF8": {
                "content": "<table><tr><td colspan=\"2\">Novel</td><td>Target</td><td colspan=\"3\">Candidate Result Sentence</td></tr><tr><td colspan=\"6\">Emma [Great Emma Emma + Estella Estella -She hates all men and plots to wreak twisted</td></tr><tr><td colspan=\"2\">Expecta-</td><td/><td/><td/><td>revenge by teaching [MASK] to torment and</td></tr><tr><td>tions</td><td/><td/><td/><td/><td>spurn men, including Pip who loves her.</td></tr><tr><td>A</td><td>Tale</td><td>Miss</td><td>Miss</td><td>+</td><td>[MASK] permanently loses her hearing when</td></tr><tr><td colspan=\"2\">of Two</td><td>Pross</td><td>Pross</td><td/><td>the fatal pistol shot goes off during her climac-</td></tr><tr><td colspan=\"2\">Cities</td><td/><td/><td/><td>tic fight with Madame Defarge.</td></tr><tr><td>A</td><td>Tale</td><td>Lucy</td><td>Lucy</td><td>-</td><td/></tr><tr><td colspan=\"2\">of Two</td><td>Manette</td><td>Manette</td><td/><td/></tr><tr><td colspan=\"2\">Cities</td><td/><td/><td/><td/></tr></table>",
                "type_str": "table",
                "text": "[MASK] the protagonist of the story is beautiful high spirited intelligent and lightly spoiled young woman from the landed gentry.",
                "num": null,
                "html": null
            },
            "TABREF9": {
                "content": "<table/>",
                "type_str": "table",
                "text": "Examples of the Is this Me? continuously trained classifier's performance on out-of-domain masked mentions found within the WikiNovels corpus. Non-target mentions have been de-masked for better readability.",
                "num": null,
                "html": null
            }
        }
    }
}