{
    "paper_id": "2020",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:28:29.496680Z"
    },
    "title": "Amplifying the Range of News Stories with Creativity: Methods and their Evaluation, in Portuguese",
    "authors": [
        {
            "first": "Rui",
            "middle": [],
            "last": "Mendes",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Coimbra",
                "location": {
                    "country": "Portugal"
                }
            },
            "email": ""
        },
        {
            "first": "Hugo",
            "middle": [
                "Gon\u00e7alo"
            ],
            "last": "Oliveira",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "University of Coimbra",
                "location": {
                    "country": "Portugal"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Headlines are key for attracting people to a story, but writing appealing headlines requires time and talent. This work aims to automate the production of creative short texts (e.g., news headlines) for an input context (e.g., existing headlines), thus amplifying its range. Well-known expressions (e.g., proverbs, movie titles), which typically include word-play and resort to figurative language, are used as a starting point. Given an input text, they can be recommended by exploiting Semantic Textual Similarity (STS) techniques, or adapted towards higher relatedness. For the latter, three methods that exploit static word embeddings are proposed. Experimentation in Portuguese lead to some conclusions, based on human opinions: STS methods that look exclusively at the surface text, recommend more related expressions; resulting expressions are somewhat related to the input, but adaptation leads to higher relatedness and novelty; humour can be an indirect consequence, but most outputs are not funny.",
    "pdf_parse": {
        "paper_id": "2020",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Headlines are key for attracting people to a story, but writing appealing headlines requires time and talent. This work aims to automate the production of creative short texts (e.g., news headlines) for an input context (e.g., existing headlines), thus amplifying its range. Well-known expressions (e.g., proverbs, movie titles), which typically include word-play and resort to figurative language, are used as a starting point. Given an input text, they can be recommended by exploiting Semantic Textual Similarity (STS) techniques, or adapted towards higher relatedness. For the latter, three methods that exploit static word embeddings are proposed. Experimentation in Portuguese lead to some conclusions, based on human opinions: STS methods that look exclusively at the surface text, recommend more related expressions; resulting expressions are somewhat related to the input, but adaptation leads to higher relatedness and novelty; humour can be an indirect consequence, but most outputs are not funny.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Each minute, new stories are published online, listed in news aggregators, and spread through social media. With the amount of information each of us is constantly bombed with, most people end up looking at the headlines and, only sporadically, opening and reading the full text. We may thus say that headlines play a key role in this process: the more appealing they are, the higher the probability of someone actually reading their story. And while personal preferences are relevant, creativity and familiarity (e.g., text resembles a funny saying or situation) certainly contribute to higher appeal.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "A common strategy for making the headline more catchy is to reuse expressions known by a general audience, sometimes also attempting at a humorous effect. If the expression is related enough, it can be used directly, but it may also suffer minor adaptations, to become more related to the context and still resemble the original saying. This is also common in news satire tv shows, like The Daily Show. While the host is telling a story, in one of the top corners of the screen, a short text (e.g., a book, movie title, proverb, idiomatic expression, or their adaptation), often accompanied by an image, complements the scene.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "However, writing catchy headlines requires time and talent. Even if resorting to well-known expressions is a possible shortcut, it is still a knowledge intensive task (i.e., on folk or pop culture) and requires creativity skills. Therefore, in this paper, we propose to automate this process and assess different unsupervised methods for, given a short text (e.g., a news headline): (i) recommending a related known expression (e.g., a proverb) from a set; (ii) adapting a known expression so that it becomes related to the input. In any case, it should be possible to use the resulting expression as an alternative, though more creative, headline, sub-title, or, at least, a comment in social media. Methods tested for recommendation are mostly baselines for Semantic Textual Similarity (STS) (Agirre et al., 2012) , including some based on the surface text, others based on static word embeddings, and on BERT (Devlin et al., 2019) contextual embeddings. As for the adaptation, static word embeddings are exploited for replacing some words in the expressions, taken directly from the headline, or based on similarity or analogy.",
                "cite_spans": [
                    {
                        "start": 794,
                        "end": 815,
                        "text": "(Agirre et al., 2012)",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 912,
                        "end": 933,
                        "text": "(Devlin et al., 2019)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Proposed methods were tested with a set of news headlines, as input text, and a set of proverbs and movie titles. We have worked with Portuguese, and even though the resources and tools used are for the target language, theoretically, the methods are language-independent. The development of automatic approaches for producing creative artefacts, sometimes based on a given context or cur-rent events, is not new. The proposed adaptation methods have some novelty, when compared to the well-established methods for recommendation. However, here they are used with proverbs, which poses additional challenges, such as the frequent utilisation of figurative language. Finally, we target Portuguese, a language for which work of this kind is still scarce.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "For better understanding how successful this approach was, results of this Portuguese instantiation were assessed. Due to the underlying subjectivity in the appreciation of the results, evaluations (one for recommendation, another for adaptation) are based on the opinion of human judges, who scored pairs of headlines and expressions by different methods. This further enabled to compare methods and draw some conclusions, such as: on average, results of recommendation methods are not significantly different from each other, but using expressions that share words with the headline seems to increase the perceived relatedness; resulting expressions are somewhat related to the input, and adaptation leads to higher relatedness and novelty; humour can be an indirect consequence, but most outputs are not funny.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "In the remainder of this paper, we overview work on the generation of creative text, focusing on those inspired by news or current events. We then describe our approaches for amplifying news stories with creative text and their evaluation: first, for the recommendation of known expressions, including the comparison of different methods; second, we propose three methods for adapting such expressions. We conclude with a brief discussion and some cues for future work.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Computational Creativity (CC) is at the intersection of Artificial Intelligence and scientific areas like cognitive psychology, philosophy, and the arts. One of its ends is to develop creative systems, i.e., computational systems that exhibit behaviours deemed as creative by unbiased observers (Colton and Wiggins, 2012) , e.g., they can produce new music or visual art, among others, including text.",
                "cite_spans": [
                    {
                        "start": 295,
                        "end": 321,
                        "text": "(Colton and Wiggins, 2012)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Work on linguistically-creative systems has tackled the generation of textual artefacts like narrative (Gerv\u00e1s et al., 2019) , poetry Chrismartin and Manurung, 2015; Gon\u00e7alo Oliveira, 2017) , memorable news headlines (Gatti et al., 2015; Veale et al., 2017; Alnajjar et al., 2019) and slogans (Alnajjar and Toivonen, 2020) , or humour (Binsted and Ritchie, 1997; Valitutti et al., 2016) . Among those, some were adapted for producing new textual content based on current events. Poetry generation has been: guided by dependency relations in a single news story (Chrismartin and Manurung, 2015) ; inspired by the mood and related similes in a set of news stories, possibly including sentences from one of them ; or inspired by Twitter trends, associated words and semantic relations (Gon\u00e7alo Oliveira, 2017) .",
                "cite_spans": [
                    {
                        "start": 103,
                        "end": 124,
                        "text": "(Gerv\u00e1s et al., 2019)",
                        "ref_id": "BIBREF17"
                    },
                    {
                        "start": 134,
                        "end": 165,
                        "text": "Chrismartin and Manurung, 2015;",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 166,
                        "end": 189,
                        "text": "Gon\u00e7alo Oliveira, 2017)",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 217,
                        "end": 237,
                        "text": "(Gatti et al., 2015;",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 238,
                        "end": 257,
                        "text": "Veale et al., 2017;",
                        "ref_id": null
                    },
                    {
                        "start": 258,
                        "end": 280,
                        "text": "Alnajjar et al., 2019)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 293,
                        "end": 322,
                        "text": "(Alnajjar and Toivonen, 2020)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 335,
                        "end": 362,
                        "text": "(Binsted and Ritchie, 1997;",
                        "ref_id": "BIBREF7"
                    },
                    {
                        "start": 363,
                        "end": 386,
                        "text": "Valitutti et al., 2016)",
                        "ref_id": "BIBREF33"
                    },
                    {
                        "start": 561,
                        "end": 593,
                        "text": "(Chrismartin and Manurung, 2015)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 782,
                        "end": 806,
                        "text": "(Gon\u00e7alo Oliveira, 2017)",
                        "ref_id": "BIBREF18"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Work on increasing the creativity, and thus memorability, of news headlines often resorts to wellknown expressions, with which readers are familiar, including slogans, movie or song titles. On this context, the creativity of an automated journalism system was increased with the injection of known expressions and figurative language (Alnajjar et al., 2019) , e.g., the headline \"Biggest gains for The Christian Democrats across Lapin vaalipiiri\" may become \"Biggest gains for The Christian Democrats, as powerful as a soldier, across Lapin vaalipiiri'. Human-written headlines were also blended with well-known expressions, through the substitution of words in the expression with keywords from the headline (Gatti et al., 2015) , e.g., \"What the Euro is coming to\" for the original headline \"UK anger at 1.7bn EU cash demand\". To reduce the risk of producing outputs with a detached meaning, a threshold was set on the cosine similarity between the original and the produced headline. Also relying on vector semantics and on the cosine, previously generated metaphors have been paired with the current news (Veale et al., 2017) , e.g., the news tweet \"@newtgingrich says 'the country will become enraged' if the violent protests at @realDonaldTrump rallies continue\" was paired with metaphors like What is a radical but a crusading demagogue?.",
                "cite_spans": [
                    {
                        "start": 334,
                        "end": 357,
                        "text": "(Alnajjar et al., 2019)",
                        "ref_id": "BIBREF2"
                    },
                    {
                        "start": 709,
                        "end": 729,
                        "text": "(Gatti et al., 2015)",
                        "ref_id": "BIBREF16"
                    },
                    {
                        "start": 1109,
                        "end": 1129,
                        "text": "(Veale et al., 2017)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "When it comes to the generation of creative text, word embeddings are useful resources. In fact, the operations of similarity, neighborhood, theme and analogy have been formalised for adapting text by lexical replacement, with increased creativity, given a set of intentions (Bay et al., 2017) . In our work, we also propose methods for adapting wellknown expressions, based on a textual input (context), more like Gatti et al. (2015) , with word embeddings exploited in the selection of suitable replacements. Although any text should work, news headlines were also used.",
                "cite_spans": [
                    {
                        "start": 275,
                        "end": 293,
                        "text": "(Bay et al., 2017)",
                        "ref_id": "BIBREF6"
                    },
                    {
                        "start": 415,
                        "end": 434,
                        "text": "Gatti et al. (2015)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Humour can be a consequence of the former approaches, but it is rarely tackled specifically. There is, however, work on humour generation that also relies on lexical substitution in short texts (Valitutti et al., 2016). In this case, messages such as \"Later we go where to eat?\" may become \"Later we go where to shit?\". Replacement words should have the same part-of-speech as the original and a similar sound or spelling; the most effective constraint, however, is that they are taboo words. Hossain et al. (2019) present a corpus of original news headlines and their manually-edited funny versions. While such a corpus may be useful for many tasks, including humour generation, funny headlines are often out-of-context (e.g., \"EU says summit with Turkey provides no answers to concerns\" becomes \"EU says gravy with Turkey provides no answers to concerns\"), which is different from our goal.",
                "cite_spans": [
                    {
                        "start": 191,
                        "end": 215,
                        "text": "(Valitutti et al., 2016)",
                        "ref_id": "BIBREF33"
                    },
                    {
                        "start": 476,
                        "end": 497,
                        "text": "Hossain et al. (2019)",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Another important difference of our work is that we experiment with and present results for the Portuguese language, while the majority of the aforementioned systems process and produce English text. For Portuguese, related work is scarcer, but still covers: the automatic generation of poetry (Gon\u00e7alo Oliveira, 2017); riddles (Gon\u00e7alo Oliveira and Rodrigues, 2018), with the help of lexical-semantic knowledge; and memes (Gon\u00e7alo Oliveira et al., 2016), based on current news and rules for selecting an image macro and adapting the text.",
                "cite_spans": [
                    {
                        "start": 418,
                        "end": 449,
                        "text": "(Gon\u00e7alo Oliveira et al., 2016)",
                        "ref_id": "BIBREF19"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "A related task is that of headline generation for a given document (see, e.g., (Takase et al., 2016)), which has some similarities with automatic summarization (Banko et al., 2000) and may involve a deeper understanding of the news story. This is, however, different from our goal, as our starting point is existing headlines, for which new creative alternatives are recommended or produced.",
                "cite_spans": [
                    {
                        "start": 78,
                        "end": 99,
                        "text": "(Takase et al., 2016)",
                        "ref_id": "BIBREF32"
                    },
                    {
                        "start": 160,
                        "end": 180,
                        "text": "(Banko et al., 2000)",
                        "ref_id": "BIBREF5"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "Alternatively to text generation, some systems simply recommend (famous) quotes (Ahn et al., 2016), or retrieve interactions from movie subtitles (Ameixa et al., 2014), to be used in dialogues. This may involve training an encoder-decoder network (Shang et al., 2015) on dialogues where the target quotes are used. However, an unsupervised retrieval-based approach is also possible, e.g., relying on vector semantics and pretrained language models for computing Semantic Textual Similarity (Agirre et al., 2012; Cer et al., 2017), and retrieving the most similar quote. In this work, we also test different unsupervised methods for the direct recommendation of well-known expressions.",
                "cite_spans": [
                    {
                        "start": 80,
                        "end": 98,
                        "text": "(Ahn et al., 2016)",
                        "ref_id": "BIBREF1"
                    },
                    {
                        "start": 145,
                        "end": 166,
                        "text": "(Ameixa et al., 2014)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 247,
                        "end": 267,
                        "text": "(Shang et al., 2015)",
                        "ref_id": "BIBREF31"
                    },
                    {
                        "start": 484,
                        "end": 505,
                        "text": "(Agirre et al., 2012;",
                        "ref_id": "BIBREF0"
                    },
                    {
                        "start": 506,
                        "end": 523,
                        "text": "Cer et al., 2017)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "The first part of this work tackled the recommendation of suitable expressions for a given short text. A good example of our goal is taken from the 17th May 2020 edition of the Portuguese satirical news TV show Isto\u00e9 Gozar com Quem Trabalha: when presenting a story about the plan of the Portuguese President to take a swim in the sea after the Covid-19 lockdown, they used the text \"O Velho e o Mar\" (The Old Man and the Sea, a book by Ernest Hemingway). For this, we tested different methods for computing the semantic similarity between news headlines, the input text, and Portuguese proverbs, the expressions to output. All methods are unsupervised and can be seen as baselines for Semantic Textual Similarity (STS). Once all available proverbs are ranked according to their similarity to the input, the top-ranked one, which maximises similarity, is recommended. This section describes the tested methods, how they were used in this experimentation, their results, and their evaluation by human judges.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Recommendation of Creative Text in Context",
                "sec_num": "3"
            },
            {
                "text": "The following STS methods were covered: shallow methods that consider only the surface text; methods based on sparse vector representations of the text, with word counts and TF-IDF weighting; methods based on static word embeddings, with sentences represented by their average embedding, weighted with TF-IDF or not; and methods based on contextual embeddings that encode each sentence into a single vector. Overall, eight methods were tested: Jaccard Similarity; CountVectorizer; TfIdfVectorizer; GloVe (Pennington et al., 2014) embeddings, with and without TF-IDF; FastText-CBOW (Bojanowski et al., 2017) embeddings, with and without TF-IDF; and BERT (Devlin et al., 2019) embeddings. All methods return a value between 0 and 1, proportional to the similarity between the two sentences. Jaccard Similarity satisfies this by definition, while, for the other methods, this value is given by the cosine similarity between the vector representations of each text.",
                "cite_spans": [
                    {
                        "start": 500,
                        "end": 525,
                        "text": "(Pennington et al., 2014)",
                        "ref_id": "BIBREF27"
                    },
                    {
                        "start": 577,
                        "end": 602,
                        "text": "(Bojanowski et al., 2017)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 648,
                        "end": 669,
                        "text": "(Devlin et al., 2019)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "3.1"
            },
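The two simplest of these scoring functions can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sentences are invented examples, and the sparse count-vector cosine is a plain stand-in for scikit-learn's CountVectorizer.

```python
import math
from collections import Counter


def jaccard(a_tokens, b_tokens):
    """Jaccard similarity between two token sets; already in [0, 1]."""
    a, b = set(a_tokens), set(b_tokens)
    return len(a & b) / len(a | b) if a | b else 0.0


def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    dot = sum(c * v.get(w, 0) for w, c in u.items())
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0


headline = "quem arrisca ganha".split()
proverb = "quem não arrisca não petisca".split()
print(jaccard(headline, proverb))
print(cosine(Counter(headline), Counter(proverb)))
```

The remaining methods only change how the two texts are turned into vectors (TF-IDF weights, averaged static embeddings, or a BERT sentence encoding); the cosine step stays the same.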
            {
                "text": "With the News API 1 , a set of 60 news headlines was gathered from the online editions of Portuguese newspapers. The source of expressions was a corpus of 1,617 Portuguese proverbs, obtained from project Natura 2 . For all methods but BERT, headlines and proverbs were tokenized with the NLPyPort package (Ferreira et al., 2019), a layer on top of NLTK (Loper and Bird, 2002) for better handling of Portuguese.",
                "cite_spans": [
                    {
                        "start": 302,
                        "end": 325,
                        "text": "(Ferreira et al., 2019)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 351,
                        "end": 373,
                        "text": "(Loper and Bird, 2002)",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "3.2"
            },
            {
                "text": "Sparse-vector representations of sentences and the computation of TF-IDF were based on the corpus of proverbs. For GloVe and FastText, we used pretrained models for Portuguese, both with 300-dimensional vectors, respectively from the NILC (Hartmann et al., 2017) and fastText.cc 3 repositories. Finally, for BERT, we used a pretrained multilingual model that covers 104 languages, including Portuguese, BERT-Base, Multilingual Cased 4 .",
                "cite_spans": [
                    {
                        "start": 228,
                        "end": 251,
                        "text": "(Hartmann et al., 2017)",
                        "ref_id": "BIBREF22"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "3.2"
            },
            {
                "text": "All methods were implemented in Python, with the help of the following packages:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "3.2"
            },
            {
                "text": "scikit-learn (Pedregosa et al., 2011), for the CountVectorizer (Count) and TfIdfVectorizer (TFIDF); gensim (\u0158eh\u016f\u0159ek and Sojka, 2010), for handling the static word embeddings; and bert-as-a-service 5 , for loading and using BERT for encoding sentences.",
                "cite_spans": [
                    {
                        "start": 13,
                        "end": 37,
                        "text": "(Pedregosa et al., 2011)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "3.2"
            },
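With these packages, the TF-IDF variant of the ranking reduces to a few lines. The sketch below uses toy stand-ins for the 1,617-proverb corpus and the gathered headlines and, as in the setup above, fits the vectorizer on the proverbs only; the top-ranked proverb is the one recommended.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the proverb corpus and one input headline.
proverbs = [
    "quem não arrisca não petisca",
    "devagar se vai ao longe",
    "água mole em pedra dura tanto bate até que fura",
]
headline = "banco arrisca perder clientes"

# TF-IDF statistics are computed on the proverb corpus only.
vec = TfidfVectorizer()
P = vec.fit_transform(proverbs)
h = vec.transform([headline])

# Rank all proverbs by cosine similarity to the headline.
sims = cosine_similarity(h, P)[0]
best = int(sims.argmax())
print(proverbs[best])  # the most similar proverb is recommended
```

Swapping `TfidfVectorizer` for `CountVectorizer`, or for averaged GloVe/FastText vectors or BERT sentence encodings, gives the other ranking variants.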
            {
                "text": "As with many other creative outputs, assessing the suitability of a proverb to a headline is a subjective task, which cannot be automated. Therefore, to compare the performance of each method in the proposed scenario, we relied on human opinions. For this, proverbs were recommended by each method for each of the 60 headlines in the gathered set. Several surveys were then created with Google Forms, each having ten of those headlines, followed by the proverbs recommended for each headline, at most eight, in random order, without repetitions or any identification of the method.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "3.3"
            },
            {
                "text": "We then asked 24 Portuguese-speaking volunteers to answer the surveys. Each survey was answered by four different judges, who scored the proverbs recommended for ten headlines, according to their relatedness and funniness, with the following questions: (a) \"How would you rate the relation between the proverbs and the news title?\" [Not related (1); Remotely related (2); Considerably related (3); Extremely related (4)]; (b) \"In relation to the headline, how funny is each proverb?\" [Not funny (1); Remotely funny (2); Considerably funny (3); Extremely funny (4)]. Volunteers were not informed that the expressions were proverbs, nor that they had been recommended automatically. Table 1 shows the distribution of scores for the recommendations by each method, according to human opinions. It also includes the median (x), which is, nevertheless, not very discriminative. We omit both GloVe and FastText with TF-IDF because their scores were not much different.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 681,
                        "end": 688,
                        "text": "Table 1",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "3.3"
            },
            {
                "text": "Out of the possible scores for relatedness, not related (1) was always the most common. A curious outcome is that pretrained embeddings, like GloVe, FastText and, especially, BERT, do not make much qualitative difference in the results. In fact, most judges gave higher scores to proverbs that share one or more words with the headline, which does not always happen when word or sentence embeddings are involved (e.g., the first two examples in Table 2). We recall that most proverbs are highly figurative, meaning that models trained on general language may struggle to interpret them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "3.3"
            },
            {
                "text": "A deeper look shows that only two methods had at least 10% recommendations with average relatedness scores higher than 3.5, namely the Jaccard Similarity (12%) and TFIDF (10%); and only three recommendations had the highest average relatedness, namely the first three examples in table 2.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "3.3"
            },
            {
                "text": "Funniness scores are not much different, and only two recommendations got the highest average score from all judges, namely the last two examples in Table 2. Though not intended, both use taboo words, which likely contributed to their high scores. This also suggests that we could increase funniness by forcing the presence of such words (Valitutti et al., 2016). On the other hand, this would come at the cost of lower relatedness.",
                "cite_spans": [
                    {
                        "start": 343,
                        "end": 367,
                        "text": "(Valitutti et al., 2016)",
                        "ref_id": "BIBREF33"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "3.3"
            },
            {
                "text": "According to the previous evaluation, it is not easy to find suitable proverbs for a given headline. This could possibly be improved if the corpus of proverbs were increased. However, headlines can be significantly different and on virtually any topic, so another option is to start from any known expression and adapt it, so that it becomes more related to the headline, while still resembling the original expression.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adaptation of Expressions to a Context",
                "sec_num": "4"
            },
            {
                "text": "This is not new and is also a commonly adopted strategy in news satire.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adaptation of Expressions to a Context",
                "sec_num": "4"
            },
            {
                "text": "For instance, in the 5th April 2020 edition of Isto\u00e9 Gozar com Quem Trabalha, the expression \"Droga de Elite\" (Elite drug), an adaptation of the Brazilian movie title \"Tropa de Elite\" (Elite Squad), illustrated a story on Covid-19 in the Brazilian favelas, where drug dealers were ensuring that residents followed the sanitary rules. This section describes three methods that exploit static word embeddings in the automatic adaptation of a given expression, to suit, as much as possible, an input short text, in such a way that it can be used for transmitting or complementing the same idea, though more creatively. After presenting the methods, we describe an experiment where Portuguese proverbs and movie titles were adapted to news headlines, followed by their results and evaluation.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Adaptation of Expressions to a Context",
                "sec_num": "4"
            },
            {
                "text": "We propose three automatic methods for adapting a known expression to a given short text. Besides a list of well-known expressions, to be modified according to the input text (e.g., a news headline), all methods: (i) exploit a pretrained model of static word embeddings; (ii) assume that the most relevant words in a text are previously computed; (iii) go through all the expressions in the list and try to make adaptations guided by the most relevant words of both the expressions and the input texts. The methods differ only in the strategy adopted for selecting the word(s) to replace.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "4.1"
            },
            {
                "text": "The first method, Substitution, replaces the most relevant word in the expression, a, by a word from the input text, b, or by a word similar to b. Our intuition is that, by using a relevant word of the input text, the meaning of the expression becomes more semantically related to the given context.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "4.1"
            },
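A minimal sketch of Substitution follows. The neighbourhood of b is passed in explicitly instead of being queried from the GloVe model, the morphology checks described below are omitted, and the proverb and pair (a = amigos, b = bancos) are purely illustrative:

```python
def substitute(expr_tokens, a_index, b, b_neighbours=()):
    """Substitution (sketch): replace the most relevant word of the
    expression (at a_index) by the input's relevant word b, or by one
    of b's nearest embedding neighbours (here passed in explicitly).
    Morphology (PoS/gender/number) agreement is not checked."""
    variants = []
    for candidate in (b, *b_neighbours):
        tokens = list(expr_tokens)
        tokens[a_index] = candidate
        variants.append(" ".join(tokens))
    return variants


# Illustrative only: a = "amigos" (position 0) is replaced by b = "bancos".
print(substitute(["amigos", "amigos", "negócios", "à", "parte"], 0, "bancos"))
# → ['bancos amigos negócios à parte']
```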
            {
                "text": "The second method, Analogy, relies on a common operation for computing analogies in word embeddings, i.e., b \u2212 a + a * = b * (Mikolov et al., 2013), phrased as b * is to b as a * is to a. The strategy is to use the two most relevant words in the expression as a and a * , the most relevant word in the input as b, and then: (i) from the previous three, compute a new word b * ; (ii) in the original expression, replace a and a * by b and b * , respectively. Given that both pairs of words are analogously related, our intuition is that the result will still make sense and be more related to the input text.",
                "cite_spans": [
                    {
                        "start": 125,
                        "end": 147,
                        "text": "(Mikolov et al., 2013)",
                        "ref_id": "BIBREF25"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "4.1"
            },
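The analogy operation at the heart of this method can be sketched with toy vectors in place of a real embedding model; the classic rei/homem/mulher/rainha quadruple below is only an illustration, not data from the experiment:

```python
import numpy as np


def analogy(vecs, b, a, a_star):
    """Return the word b* maximising cosine with (b - a + a*), i.e. the
    additive analogy of Mikolov et al.; `vecs` maps words to vectors."""
    target = vecs[b] - vecs[a] + vecs[a_star]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -1.0
    for word, v in vecs.items():
        if word in (b, a, a_star):
            continue  # exclude the three query words, as gensim does
        sim = float(v @ target / np.linalg.norm(v))
        if sim > best_sim:
            best, best_sim = word, sim
    return best


# Toy embedding space built so that rei - homem + mulher ≈ rainha.
vecs = {
    "rei":    np.array([1.0, 1.0, 0.0]),
    "homem":  np.array([1.0, 0.0, 0.0]),
    "mulher": np.array([0.0, 0.0, 1.0]),
    "rainha": np.array([0.0, 1.0, 1.0]),
}
print(analogy(vecs, "rei", "homem", "mulher"))  # → rainha
```

In the method itself, b* computed this way replaces a*, while b replaces a in the original expression.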
            {
                "text": "The third method, Vector Difference (VecDiff), also selects the two most relevant words, now in the input text, b and b * , and then: (i) computes the difference vector b \u2212 b * ; (ii) identifies the pair of open-class words in the expression, a and a * , such that a \u2212 a * maximises the (cosine) similarity with b \u2212 b * ; (iii) replaces a and a * by b and b * , respectively. Our intuition is that the new text will not only use two words of the input, and thus be more related, but also that the words will be included in such a way that their relation is roughly preserved.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "4.1"
            },
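Step (ii), the search for the pair (a, a*) whose offset best matches b − b*, can be sketched as below; the words and two-dimensional vectors are toy stand-ins for the GloVe embeddings, and the replacement and morphology steps are omitted:

```python
from itertools import permutations

import numpy as np


def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))


def vecdiff_pair(vecs, expr_words, b, b_star):
    """VecDiff (sketch): pick the ordered pair (a, a*) of expression
    words whose offset a - a* is most similar to b - b*, the offset of
    the two most relevant input words; (a, a*) would then be replaced
    by (b, b*)."""
    target = vecs[b] - vecs[b_star]
    return max(permutations(expr_words, 2),
               key=lambda pair: cos(vecs[pair[0]] - vecs[pair[1]], target))


# Toy vectors: the offset fere - ferido is parallel to finge - detido.
vecs = {
    "finge":  np.array([1.0, 0.0]),
    "detido": np.array([0.0, 1.0]),
    "fere":   np.array([2.0, 0.0]),
    "ferido": np.array([0.0, 2.0]),
    "quem":   np.array([1.0, 0.5]),
}
print(vecdiff_pair(vecs, ["fere", "ferido", "quem"], "finge", "detido"))
# → ('fere', 'ferido')
```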
            {
                "text": "To avoid syntactic inconsistencies, in any method, replacement candidates must match the morphology of the replaced word, including part-of-speech (PoS), gender and number, according to a morphology lexicon or a PoS tagger. If the morphology does not match, the lexicon can be further used to inflect the candidate to the target form. Moreover, the set of possible replacements can be augmented by considering not only the relevant words in the input, but also their most semantically-similar words, computed in the embeddings, e.g., in the Substitution method, a can be replaced by a word different from, but semantically similar to, b. Table 3 shows an example of the application of each method, including the original headline, a proverb and the resulting output. Replaced words and their replacements are underlined. In the first example, b = bancos replaces a = amigos. In the second, comeces = apontar \u2212 deixes + fazer.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 620,
                        "end": 627,
                        "text": "Table 3",
                        "ref_id": "TABREF4"
                    }
                ],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "4.1"
            },
            {
                "text": "In the third, fere \u2212 ferido \u2248 finge \u2212 detido.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Methods",
                "sec_num": "4.1"
            },
            {
                "text": "Although the proposed methods are language-independent, we focused again on Portuguese. This time, we used the same \u22481,600 proverbs as in the previous section, but added about 2,500 movie titles in Portuguese, obtained from IMDB 6 . For better identification of the titles, we discarded the 25% oldest, as well as all titles with fewer than four words. Moreover, to avoid the inclusion of English names, all words in a title had to be in a Portuguese morphology lexicon.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "4.2"
            },
            {
                "text": "Regarding pre-processing, NLPyPort (Ferreira et al., 2019) was used for tokenization, PoS tagging and lemmatization, and the morphology lexicon LABEL-Lex (Ranchhod et al., 1999) for information on word inflections. Relevant words were considered to be the least frequent words that still appear in the newspaper corpus CETEMP\u00fablico (Rocha and Santos, 2000). For word embeddings, we used the same GloVe model as in the previous section.",
                "cite_spans": [
                    {
                        "start": 35,
                        "end": 58,
                        "text": "(Ferreira et al., 2019)",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 154,
                        "end": 177,
                        "text": "(Ranchhod et al., 1999)",
                        "ref_id": "BIBREF28"
                    },
                    {
                        "start": 324,
                        "end": 348,
                        "text": "(Rocha and Santos, 2000)",
                        "ref_id": "BIBREF29"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "4.2"
            },
            {
                "text": "For experimentation, we used a set of 100 news headlines on different topics, posted between April and May 2020 in the Twitter accounts of Portuguese newspapers. An initial set was randomly selected, but darker headlines (e.g., about death) were later manually excluded, to reach 100. For each headline, new texts were produced with the previous three methods. However, applying each method to a headline could result in several new texts.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "4.2"
            },
            {
                "text": "Even if, due to the morphology constraints, some expressions end up not being used, others will be. Moreover, for each relevant word, we considered the top-5 most similar words. Thus, the same method may produce several variations of the same expression for the same input, one for each candidate.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "4.2"
            },
            {
                "text": "For selecting a single expression, the recommendation methods described in section 3.1 are used at the beginning and at the end of the process. First, given a headline, they select the subset of the full corpus of expressions to which each adaptation method will be applied. Since the previous evaluation (section 3.3) did not help in the choice of a single recommendation method, we used two significantly different methods for this selection: TF-IDF and BERT. More precisely, given the headline, this subset contains the top-30 expressions recommended by TF-IDF, the top-30 recommended by BERT, plus a random selection of 30, for higher diversity. This should still result in several (adapted) expressions, from which a single one has to be selected. This final selection is again made with the help of TF-IDF or BERT, which rank the results, allowing us to use only the top-ranked, i.e., the most similar to the headline, as long as it is not equal to an existing expression. Thus, depending on the final recommendation method, the resulting expression may differ.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimentation Setup",
                "sec_num": "4.2"
            },
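The two-stage selection can be sketched end-to-end as below. Everything here is a placeholder: a word-overlap scorer stands in for both the TF-IDF and BERT rankers, a trivial function stands in for the three adaptation methods, and the final ranking is done with the TF-IDF stand-in.

```python
import random


def top_k(rank, headline, corpus, k):
    """Top-k expressions by a ranking function rank(headline, expression)."""
    return sorted(corpus, key=lambda e: rank(headline, e), reverse=True)[:k]


def creative_expression(headline, corpus, rank_tfidf, rank_bert,
                        adapt_methods, rng, k=30):
    """Two-stage selection (sketch): pre-select 3*k expressions, adapt
    them with every method, discard results equal to an existing
    expression, and keep the candidate ranked highest for the headline."""
    pool = (top_k(rank_tfidf, headline, corpus, k)
            + top_k(rank_bert, headline, corpus, k)
            + rng.sample(corpus, k))  # k random picks, for diversity
    candidates = [new for method in adapt_methods
                  for expr in pool
                  for new in method(headline, expr)]
    candidates = [c for c in candidates if c not in corpus]  # must be novel
    return max(candidates, key=lambda c: rank_tfidf(headline, c))


# Toy run with word-overlap "rankers" and a trivial adaptation method.
overlap = lambda h, e: len(set(h.split()) & set(e.split()))
append = lambda h, e: [e + " novo"]
corpus = ["alvo certo"] + ["expressao %d" % i for i in range(29)]
print(creative_expression("alvo da noticia", corpus, overlap, overlap,
                          [append], random.Random(0)))
# → alvo certo novo
```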
            {
                "text": "To get insights on the suitability of the proposed methods, we relied again on human opinions. Since the text was being changed automatically, it was now important to assess syntax, i.e., whether the resulting expression had grammatical or structural issues that could make it difficult to interpret. However, syntax is less subjective than the other criteria, so it was assessed by the authors in a set of expressions produced for another subset of 30 headlines. This was enough to conclude that there were syntactic issues in only a minority of the produced expressions (3% for Substitution, 8% for Analogy, 13% for Vector Difference), and that the majority of these issues did not have much impact on interpretability.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "Furthermore, we decided to assess an important aspect of creativity, which is novelty. In addition to being related to the input text, results should also be novel, in the sense that they have not been produced before or associated with the input text. Ideally, in addition to the former, the new expression would also have the potential of making the reader laugh (funniness).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "To have the resulting expressions scored according to the evaluation criteria, a survey was again created, with a headline on each page, followed by expressions from eight different processes, namely the results of the three adaptation methods applied to the TF-IDF or BERT pre-selections (6), plus expressions recommended directly by TF-IDF and BERT, with no adaptation (2). The latter were included to enable a comparison between reusing well-known expressions directly and reusing them with an adaptation, also having in mind that, in section 3, the headlines were different and so was the evaluation scale.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "For each resulting expression, human judges were asked to score the relatedness, by selecting one of the following: (1) There is no relation at all between the generated expression and the input;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "(2) The expression is somewhat related to the input, because it shares one or two words, or other contextual aspects; (3) The expression is clearly related to the input's context, and could replace it or be used as a comment. Novelty could be scored as: (1) I knew the expression well before reading it;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "(2) Reminds me of some expression, but has some changes; (3) The expression is completely new to me. Finally, for funniness, the options were:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "(1) The expression is not funny and should make no one laugh; (2) The expression is somewhat funny or could be, depending on the reader's subjective view;",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "(3) The expression is very funny and has a great potential to make people laugh.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "This time, we had 100 Portuguese-speaking volunteers, each answering the previous questions for five headlines and their eight expressions, in such a way that each expression was scored by five different judges, who were not informed that the expressions were produced automatically. Table 4 shows the distribution of scores given in the human evaluation for text produced by each method.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 284,
                        "end": 291,
                        "text": "Table 4",
                        "ref_id": "TABREF6"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "We recall that one of the motivations for adapting text, instead of simply reusing it, was to increase relatedness. Even though the impact was not as high as expected, this trend is confirmed by the higher proportion of the highest scores (2 and 3) for the adaptation methods on this criterion. This is clearer if we look at results using the same method for the final recommendation. Moreover, the figures again suggest that TF-IDF leads to better relatedness, because texts recommended by this method have a higher proportion of results scored with 3 than those by BERT. In section 3.3, we pointed out that this may be due to BERT recommendations sharing fewer words with the context. Yet, in the future, we should also explore different ways of using BERT for this purpose.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "When using TF-IDF in the final recommendation, about a third of the results are highly related to the headline, which is acceptable, especially if, before utilisation, results can be manually selected out of the top-ranked. Curiously, the proportion of completely related results is higher for the simplest method, Substitution, and lower for Vector Difference. The latter is also the only one of the three methods that, in this scenario, has more than a third of results considered unrelated to their headline. Table 5 presents three high-scoring examples, with the adaptation method, the input headline, the original expression and the output expression, with the considered words underlined and average scores on the three assessed criteria.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 512,
                        "end": 519,
                        "text": "Table 5",
                        "ref_id": "TABREF8"
                    }
                ],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "On the remaining criteria, adapted expressions are more novel, a key aspect for a creative system, which shows we can go further than simply reusing text. This was expected because we are comparing newly created expressions with proverbs and movie titles, most of which are part of our culture and known by many people. At the same time, even though they are new, resulting expressions should still resemble existing expressions, which is why not all are scored with 3. For the same reason, novelty of the recommendation methods was lower, but not always 1, possibly because some judges did not know all the expressions. This, however, could also have positively influenced the scores of the adaptation methods, i.e., if judges do not know the original expression, they will also not associate its adaptation with an expression they previously heard. On this criterion, BERT did not make much difference when applied to the adaptation methods, but the novelty of its recommendations is significantly higher than for TF-IDF. We also note that the ranking of the adaptation methods according to novelty is the inverse of that for relatedness, possibly because some judges rated novelty with respect to the headline. The middle example in Table 5 is one of the best-scored results on novelty.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "About half of the results of each method are simply not funny. On the other hand, the proportion of clearly funny results is between 19% and 26%, which, considering that this criterion is not explicitly tackled, is not bad, and suggests that humour can indeed be a consequence of this word-play. All the examples in Table 5 have an average funniness of 2 or higher. In the first, funniness is more subjective, but the result may suggest too much promiscuity between banks and the government. The last two make unexpected associations, like people with a herd, or honey / love with despair.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Evaluation",
                "sec_num": "4.3"
            },
            {
                "text": "Aiming at the amplification of news stories, we explored a set of automatic methods for making headlines more creative and appealing, with the exploitation of well-known expressions and word embeddings. When tested with Portuguese news headlines, human opinions suggest that our goals were achieved with relative success. Results are somewhat related to input headlines, especially if known expressions are adapted, and not used directly, which also increases novelty, intimately related to creativity. For the best adaptation methods, about a third of the results was clearly related to the headline. Although, in a few cases, humour was an indirect consequence, most of the outputs were not so funny. In the future, funniness may benefit from ranking candidate results with a humour classifier, based on different humour-relevant features, such as the one recently proposed for Portuguese (Clem\u00eancio et al., 2019) .",
                "cite_spans": [
                    {
                        "start": 891,
                        "end": 915,
                        "text": "(Clem\u00eancio et al., 2019)",
                        "ref_id": "BIBREF11"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "All the methods are unsupervised and exploring supervision was never our goal, mainly because of the lack of training data, e.g., different Portuguese headlines for the same news scored according to their creativity, possibly also including some created specifically for this purpose. While such a dataset is not available for Portuguese, for English, a crowdsourced corpus of 15k original news headlines and their manually-edited funny versions was recently presented (Hossain et al., 2019) , thus making room for a data-driven approach to our task.",
                "cite_spans": [
                    {
                        "start": 469,
                        "end": 491,
                        "text": "(Hossain et al., 2019)",
                        "ref_id": "BIBREF23"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "Another observation of our work was that, in this context, methods that consider exclusively the surface text retrieve proverbs that are perceived as more related. This happens because recommended proverbs tend to use some words of the input, immediately suggesting some relation. Although state-of-the-art BERT embeddings would be better at representing sentence meanings, they are not apt to deal with the figurative language in proverbs, at least when they are not fine-tuned for this, or possibly trained only on the target language. In the future, it would be interesting to conduct a deeper study of STS and figurative language.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "The proposed methods can be integrated into a tool for suggesting alternative headlines that are still related to a story, possibly useful for journalists. Even if not all results have the necessary quality, different options may be produced (e.g., the top-n for each method), faster than a human could, and a human may still make the final decision or additional adaptations. To some extent, enabling human interaction would make such a tool co-creative, as other interactive systems developed for writing stories (Roemmele and Gordon, 2015) , song lyrics (Watanabe et al., 2017) or poetry , among others.",
                "cite_spans": [
                    {
                        "start": 515,
                        "end": 542,
                        "text": "(Roemmele and Gordon, 2015)",
                        "ref_id": "BIBREF30"
                    },
                    {
                        "start": 557,
                        "end": 580,
                        "text": "(Watanabe et al., 2017)",
                        "ref_id": "BIBREF35"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "Finally, this work should also contribute to the development of more creative chatbots, e.g., capable of providing creative responses to out-of-domain interactions. In the meantime, the TECo Twitterbot 7 is already following several Portuguese newspapers and retweeting some of their publications together with comments produced by the proposed methods.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5"
            },
            {
                "text": "https://newsapi.org/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "natura.di.uminho.pt/wiki/doku.php 3 https://fasttext.cc/ 4 github.com/google-research/bert 5 github.com/hanxiao/bert-as-service",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "https://www.imdb.com/interfaces/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
        ],
        "back_matter": [],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "SemEval-2012 task 6: A pilot on semantic textual similarity",
                "authors": [
                    {
                        "first": "Eneko",
                        "middle": [],
                        "last": "Agirre",
                        "suffix": ""
                    },
                    {
                        "first": "Mona",
                        "middle": [],
                        "last": "Diab",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Cer",
                        "suffix": ""
                    },
                    {
                        "first": "Aitor",
                        "middle": [],
                        "last": "Gonzalez-Agirre",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of 1st Joint Conference on Lexical and Computational Semantics",
                "volume": "1",
                "issue": "",
                "pages": "385--393",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. SemEval-2012 task 6: A pilot on semantic textual similarity. In Proceedings of 1st Joint Conference on Lexical and Computational Semantics-Vol. 1: Proceedings of main conference and shared task, and Vol. 2: Proceedings of Sixth International Workshop on Semantic Evaluation, pages 385-393. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Quote recommendation for dialogs and writings",
                "authors": [
                    {
                        "first": "Yeonchan",
                        "middle": [],
                        "last": "Ahn",
                        "suffix": ""
                    },
                    {
                        "first": "Hanbit",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Heesik",
                        "middle": [],
                        "last": "Jeon",
                        "suffix": ""
                    },
                    {
                        "first": "Seungdo",
                        "middle": [],
                        "last": "Ha",
                        "suffix": ""
                    },
                    {
                        "first": "Sang-Goo",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of 3rd Workshop on New Trends in Content-Based Recommender Systems, CBRecSys@ RecSys",
                "volume": "",
                "issue": "",
                "pages": "39--42",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Yeonchan Ahn, Hanbit Lee, Heesik Jeon, Seungdo Ha, and Sang-Goo Lee. 2016. Quote recommendation for dialogs and writings. In Proceedings of 3rd Workshop on New Trends in Content-Based Recommender Systems, CBRecSys@RecSys, pages 39-42.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "No time like the present: Methods for generating colourful and factual multilingual news headlines",
                "authors": [
                    {
                        "first": "Khalid",
                        "middle": [],
                        "last": "Alnajjar",
                        "suffix": ""
                    },
                    {
                        "first": "Leo",
                        "middle": [],
                        "last": "Lepp\u00e4nen",
                        "suffix": ""
                    },
                    {
                        "first": "Hannu",
                        "middle": [],
                        "last": "Toivonen",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of 10th International Conference on Computational Creativity, ICCC 2019",
                "volume": "",
                "issue": "",
                "pages": "258--265",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Khalid Alnajjar, Leo Lepp\u00e4nen, and Hannu Toivonen. 2019. No time like the present: Methods for generating colourful and factual multilingual news headlines. In Proceedings of 10th International Conference on Computational Creativity, ICCC 2019, pages 258-265. Association for Computational Creativity.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Computational generation of slogans",
                "authors": [
                    {
                        "first": "Khalid",
                        "middle": [],
                        "last": "Alnajjar",
                        "suffix": ""
                    },
                    {
                        "first": "Hannu",
                        "middle": [],
                        "last": "Toivonen",
                        "suffix": ""
                    }
                ],
                "year": 2020,
                "venue": "Natural Language Engineering",
                "volume": "",
                "issue": "",
                "pages": "1--33",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Khalid Alnajjar and Hannu Toivonen. 2020. Computational generation of slogans. Natural Language Engineering, pages 1-33.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "Luke, i am your father: dealing with out-of-domain requests by using movies subtitles",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Ameixa",
                        "suffix": ""
                    },
                    {
                        "first": "Luisa",
                        "middle": [],
                        "last": "Coheur",
                        "suffix": ""
                    },
                    {
                        "first": "Pedro",
                        "middle": [],
                        "last": "Fialho",
                        "suffix": ""
                    },
                    {
                        "first": "Paulo",
                        "middle": [],
                        "last": "Quaresma",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "International Conference on Intelligent Virtual Agents",
                "volume": "",
                "issue": "",
                "pages": "13--21",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David Ameixa, Luisa Coheur, Pedro Fialho, and Paulo Quaresma. 2014. Luke, i am your father: dealing with out-of-domain requests by using movies subtitles. In International Conference on Intelligent Virtual Agents, pages 13-21. Springer.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Headline generation based on statistical translation",
                "authors": [
                    {
                        "first": "Michele",
                        "middle": [],
                        "last": "Banko",
                        "suffix": ""
                    },
                    {
                        "first": "Vibhu",
                        "middle": [
                            "O"
                        ],
                        "last": "Mittal",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [
                            "J"
                        ],
                        "last": "Witbrock",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "318--325",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Michele Banko, Vibhu O Mittal, and Michael J Witbrock. 2000. Headline generation based on statistical translation. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 318-325.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Text transformation via constraints and word embedding",
                "authors": [
                    {
                        "first": "Benjamin",
                        "middle": [],
                        "last": "Bay",
                        "suffix": ""
                    },
                    {
                        "first": "Paul",
                        "middle": [],
                        "last": "Bodily",
                        "suffix": ""
                    },
                    {
                        "first": "Dan",
                        "middle": [],
                        "last": "Ventura",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of 8th International Conference on Computational Creativity",
                "volume": "",
                "issue": "",
                "pages": "49--56",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Benjamin Bay, Paul Bodily, and Dan Ventura. 2017. Text transformation via constraints and word embedding. In Proceedings of 8th International Conference on Computational Creativity, ICCC 2017, pages 49-56.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Computational rules for generating punning riddles",
                "authors": [
                    {
                        "first": "Kim",
                        "middle": [],
                        "last": "Binsted",
                        "suffix": ""
                    },
                    {
                        "first": "Graeme",
                        "middle": [],
                        "last": "Ritchie",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "Humor: International Journal of Humor Research",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kim Binsted and Graeme Ritchie. 1997. Computational rules for generating punning riddles. Humor: International Journal of Humor Research.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Enriching word vectors with subword information",
                "authors": [
                    {
                        "first": "Piotr",
                        "middle": [],
                        "last": "Bojanowski",
                        "suffix": ""
                    },
                    {
                        "first": "Edouard",
                        "middle": [],
                        "last": "Grave",
                        "suffix": ""
                    },
                    {
                        "first": "Armand",
                        "middle": [],
                        "last": "Joulin",
                        "suffix": ""
                    },
                    {
                        "first": "Tomas",
                        "middle": [],
                        "last": "Mikolov",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Transactions of the Association for Computational Linguistics",
                "volume": "5",
                "issue": "",
                "pages": "135--146",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Cer",
                        "suffix": ""
                    },
                    {
                        "first": "Mona",
                        "middle": [],
                        "last": "Diab",
                        "suffix": ""
                    },
                    {
                        "first": "Eneko",
                        "middle": [],
                        "last": "Agirre",
                        "suffix": ""
                    },
                    {
                        "first": "Inigo",
                        "middle": [],
                        "last": "Lopez-Gazpio",
                        "suffix": ""
                    },
                    {
                        "first": "Lucia",
                        "middle": [],
                        "last": "Specia",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of 11th International Workshop on Semantic Evaluation (SemEval-2017)",
                "volume": "",
                "issue": "",
                "pages": "1--14",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel Cer, Mona Diab, Eneko Agirre, Inigo Lopez-Gazpio, and Lucia Specia. 2017. Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation. In Proceedings of 11th International Workshop on Semantic Evaluation (SemEval-2017), pages 1-14. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "A chart generation system for topical meaningful metrical poetry",
                "authors": [
                    {
                        "first": "Berty",
                        "middle": [],
                        "last": "Chrismartin",
                        "suffix": ""
                    },
                    {
                        "first": "Ruli",
                        "middle": [],
                        "last": "Manurung",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of The 6th International Conference on Computational Creativity, ICCC 2015",
                "volume": "",
                "issue": "",
                "pages": "308--314",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Berty Chrismartin and Ruli Manurung. 2015. A chart generation system for topical meaningful metrical poetry. In Proceedings of The 6th International Conference on Computational Creativity, ICCC 2015, pages 308-314, Park City, UT, USA.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Recognizing humor in Portuguese: First steps",
                "authors": [
                    {
                        "first": "Andr\u00e9",
                        "middle": [],
                        "last": "Clem\u00eancio",
                        "suffix": ""
                    },
                    {
                        "first": "Ana",
                        "middle": [],
                        "last": "Alves",
                        "suffix": ""
                    },
                    {
                        "first": "Hugo Gon\u00e7alo",
                        "middle": [],
                        "last": "Oliveira",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of 19th EPIA Conference on Artificial Intelligence",
                "volume": "2019",
                "issue": "",
                "pages": "744--756",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andr\u00e9 Clem\u00eancio, Ana Alves, and Hugo Gon\u00e7alo Oliveira. 2019. Recognizing humor in Portuguese: First steps. In Proceedings of 19th EPIA Conference on Artificial Intelligence, EPIA 2019, Vila Real, Portugal, September 3-6, 2019, Part II, volume 11805 of LNCS/LNAI, pages 744-756. Springer.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Full FACE poetry generation",
                "authors": [
                    {
                        "first": "Simon",
                        "middle": [],
                        "last": "Colton",
                        "suffix": ""
                    },
                    {
                        "first": "Jacob",
                        "middle": [],
                        "last": "Goodwin",
                        "suffix": ""
                    },
                    {
                        "first": "Tony",
                        "middle": [],
                        "last": "Veale",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of 3rd International Conference on Computational Creativity (ICCC)",
                "volume": "",
                "issue": "",
                "pages": "95--102",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Simon Colton, Jacob Goodwin, and Tony Veale. 2012. Full FACE poetry generation. In Proceedings of 3rd International Conference on Computational Creativity (ICCC), pages 95-102, Dublin, Ireland.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Computational creativity: the final frontier?",
                "authors": [
                    {
                        "first": "Simon",
                        "middle": [],
                        "last": "Colton",
                        "suffix": ""
                    },
                    {
                        "first": "Geraint",
                        "middle": [
                            "A"
                        ],
                        "last": "Wiggins",
                        "suffix": ""
                    }
                ],
                "year": 2012,
                "venue": "Proceedings of the 20th European Conference on Artificial Intelligence",
                "volume": "",
                "issue": "",
                "pages": "21--26",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Simon Colton and Geraint A Wiggins. 2012. Computa- tional creativity: the final frontier? In Proceedings of the 20th European Conference on Artificial Intel- ligence, pages 21-26. IOS Press.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
                "authors": [
                    {
                        "first": "Jacob",
                        "middle": [],
                        "last": "Devlin",
                        "suffix": ""
                    },
                    {
                        "first": "Ming-Wei",
                        "middle": [],
                        "last": "Chang",
                        "suffix": ""
                    },
                    {
                        "first": "Kenton",
                        "middle": [],
                        "last": "Lee",
                        "suffix": ""
                    },
                    {
                        "first": "Kristina",
                        "middle": [],
                        "last": "Toutanova",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "1",
                "issue": "",
                "pages": "4171--4186",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing. In Proceedings of 2019 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171- 4186. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Improving NLTK for processing Portuguese",
                "authors": [
                    {
                        "first": "Jo\u00e3o",
                        "middle": [],
                        "last": "Ferreira",
                        "suffix": ""
                    },
                    {
                        "first": "Hugo",
                        "middle": [
                            "Gon\u00e7alo"
                        ],
                        "last": "Oliveira",
                        "suffix": ""
                    },
                    {
                        "first": "Ricardo",
                        "middle": [],
                        "last": "Rodrigues",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of 8th Symposium on Languages, Applications and Technologies",
                "volume": "74",
                "issue": "",
                "pages": "1--18",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jo\u00e3o Ferreira, Hugo Gon\u00e7alo Oliveira, and Ricardo Rodrigues. 2019. Improving NLTK for processing Portuguese. In Proceedings of 8th Symposium on Languages, Applications and Technologies (SLATE 2019), volume 74 of OASIcs, pages 18:1-18:9. Schloss Dagstuhl.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Slogans are not forever: Adapting linguistic expressions to the news",
                "authors": [
                    {
                        "first": "Lorenzo",
                        "middle": [],
                        "last": "Gatti",
                        "suffix": ""
                    },
                    {
                        "first": "G\u00f6zde",
                        "middle": [],
                        "last": "\u00d6zbal",
                        "suffix": ""
                    },
                    {
                        "first": "Marco",
                        "middle": [],
                        "last": "Guerini",
                        "suffix": ""
                    },
                    {
                        "first": "Oliviero",
                        "middle": [],
                        "last": "Stock",
                        "suffix": ""
                    },
                    {
                        "first": "Carlo",
                        "middle": [],
                        "last": "Strapparava",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings 24th International Joint Conference on Artificial Intelligence, IJCAI 2015",
                "volume": "",
                "issue": "",
                "pages": "2452--2458",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lorenzo Gatti, G\u00f6zde\u00d6zbal, Marco Guerini, Oliviero Stock, and Carlo Strapparava. 2015. Slogans are not forever: Adapting linguistic expressions to the news. In Proceedings 24th International Joint Conference on Artificial Intelligence, IJCAI 2015, pages 2452- 2458. AAAI Press.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "The long path to narrative generation",
                "authors": [
                    {
                        "first": "Pablo",
                        "middle": [],
                        "last": "Gerv\u00e1s",
                        "suffix": ""
                    },
                    {
                        "first": "Eugenio",
                        "middle": [],
                        "last": "Concepci\u00f3n",
                        "suffix": ""
                    },
                    {
                        "first": "Carlos",
                        "middle": [],
                        "last": "Le\u00f3n",
                        "suffix": ""
                    },
                    {
                        "first": "Gonzalo",
                        "middle": [],
                        "last": "M\u00e9ndez",
                        "suffix": ""
                    },
                    {
                        "first": "Pablo",
                        "middle": [],
                        "last": "Delatorre",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "IBM Journal of Research and Development",
                "volume": "63",
                "issue": "1",
                "pages": "8--9",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Pablo Gerv\u00e1s, Eugenio Concepci\u00f3n, Carlos Le\u00f3n, Gonzalo M\u00e9ndez, and Pablo Delatorre. 2019. The long path to narrative generation. IBM Journal of Research and Development, 63(1):8-1.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "O Poeta Artificial 2.0: Increasing meaningfulness in a poetry generation Twitter bot",
                "authors": [
                    {
                        "first": "Hugo",
                        "middle": [
                            "Gon\u00e7alo"
                        ],
                        "last": "Oliveira",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the Workshop on Computational Creativity in Natural Language Generation (CC-NLG 2017)",
                "volume": "",
                "issue": "",
                "pages": "11--20",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hugo Gon\u00e7alo Oliveira. 2017. O Poeta Artificial 2.0: Increasing meaningfulness in a poetry generation Twitter bot. In Proceedings of the Workshop on Computational Creativity in Natural Language Gen- eration (CC-NLG 2017), pages 11-20, Santiago de Compostela, Spain. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "One does not simply produce funny memes! -explorations on the automatic generation of Internet humor",
                "authors": [
                    {
                        "first": "Hugo",
                        "middle": [
                            "Gon\u00e7alo"
                        ],
                        "last": "Oliveira",
                        "suffix": ""
                    },
                    {
                        "first": "Diogo",
                        "middle": [],
                        "last": "Costa",
                        "suffix": ""
                    },
                    {
                        "first": "Alexandre",
                        "middle": [],
                        "last": "Pinto",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of 7th International Conference on Computational Creativity, ICCC 2016",
                "volume": "",
                "issue": "",
                "pages": "238--245",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hugo Gon\u00e7alo Oliveira, Diogo Costa, and Alexandre Pinto. 2016. One does not simply produce funny memes! -explorations on the automatic genera- tion of Internet humor. In Proceedings of 7th In- ternational Conference on Computational Creativ- ity, ICCC 2016, pages 238-245, Paris, France.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Co-PoeTryMe: a co-creative interface for the composition of poetry",
                "authors": [
                    {
                        "first": "Hugo",
                        "middle": [
                            "Gon\u00e7alo"
                        ],
                        "last": "Oliveira",
                        "suffix": ""
                    },
                    {
                        "first": "Tiago",
                        "middle": [],
                        "last": "Mendes",
                        "suffix": ""
                    },
                    {
                        "first": "Ana",
                        "middle": [],
                        "last": "Boavida",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of 10th International Conference on Natural Language Generation",
                "volume": "",
                "issue": "",
                "pages": "70--71",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hugo Gon\u00e7alo Oliveira, Tiago Mendes, and Ana Boavida. 2017. Co-PoeTryMe: a co-creative inter- face for the composition of poetry. In Proceedings of 10th International Conference on Natural Lan- guage Generation, INLG 2017, pages 70-71, San- tiago de Compostela, Spain. ACL Press.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Exploring lexical-semantic knowledge in the generation of novel riddles in Portuguese",
                "authors": [
                    {
                        "first": "Hugo",
                        "middle": [
                            "Gon\u00e7alo"
                        ],
                        "last": "Oliveira",
                        "suffix": ""
                    },
                    {
                        "first": "Ricardo",
                        "middle": [],
                        "last": "Rodrigues",
                        "suffix": ""
                    }
                ],
                "year": 2018,
                "venue": "Proceedings of the 3rd Workshop on Computational Creativity in Natural Language Generation",
                "volume": "",
                "issue": "",
                "pages": "17--25",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Hugo Gon\u00e7alo Oliveira and Ricardo Rodrigues. 2018. Exploring lexical-semantic knowledge in the gener- ation of novel riddles in Portuguese. In Proceed- ings of the 3rd Workshop on Computational Creativ- ity in Natural Language Generation, CC-NLG 2018, pages 17-25, Tilburg, The Netherlands. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Portuguese word embeddings: Evaluating on word analogies and natural language tasks",
                "authors": [
                    {
                        "first": "Nathan",
                        "middle": [
                            "S"
                        ],
                        "last": "Hartmann",
                        "suffix": ""
                    },
                    {
                        "first": "Erick",
                        "middle": [
                            "R"
                        ],
                        "last": "Fonseca",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [
                            "D"
                        ],
                        "last": "Shulby",
                        "suffix": ""
                    },
                    {
                        "first": "Marcos",
                        "middle": [
                            "V"
                        ],
                        "last": "Treviso",
                        "suffix": ""
                    },
                    {
                        "first": "J\u00e9ssica",
                        "middle": [
                            "S"
                        ],
                        "last": "Rodrigues",
                        "suffix": ""
                    },
                    {
                        "first": "Sandra",
                        "middle": [
                            "M"
                        ],
                        "last": "Alu\u00edsio",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of 11th Brazilian Symposium in Information and Human Language Technology",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nathan S. Hartmann, Erick R. Fonseca, Christopher D. Shulby, Marcos V. Treviso, J\u00e9ssica S. Rodrigues, and Sandra M. Alu\u00edsio. 2017. Portuguese word em- beddings: Evaluating on word analogies and natural language tasks. In Proceedings of 11th Brazilian Symposium in Information and Human Language Technology (STIL 2017).",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "President Vows to Cut <Taxes> Hair\": Dataset and analysis of creative text editing for humorous headlines",
                "authors": [
                    {
                        "first": "Nabil",
                        "middle": [],
                        "last": "Hossain",
                        "suffix": ""
                    },
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Krumm",
                        "suffix": ""
                    },
                    {
                        "first": "Michael",
                        "middle": [],
                        "last": "Gamon",
                        "suffix": ""
                    }
                ],
                "year": 2019,
                "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
                "volume": "1",
                "issue": "",
                "pages": "133--142",
                "other_ids": {
                    "DOI": [
                        "10.18653/v1/N19-1012"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Nabil Hossain, John Krumm, and Michael Gamon. 2019. \"President Vows to Cut <Taxes> Hair\": Dataset and analysis of creative text editing for hu- morous headlines. In Proceedings of the 2019 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long and Short Pa- pers), pages 133-142, Minneapolis, Minnesota. As- sociation for Computational Linguistics.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Nltk: The natural language toolkit",
                "authors": [
                    {
                        "first": "Edward",
                        "middle": [],
                        "last": "Loper",
                        "suffix": ""
                    },
                    {
                        "first": "Steven",
                        "middle": [],
                        "last": "Bird",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics",
                "volume": "",
                "issue": "",
                "pages": "63--70",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Edward Loper and Steven Bird. 2002. Nltk: The nat- ural language toolkit. In Proceedings of the ACL-02 Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Compu- tational Linguistics, pages 63-70.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Linguistic regularities in continuous space word representations",
                "authors": [
                    {
                        "first": "Tomas",
                        "middle": [],
                        "last": "Mikolov",
                        "suffix": ""
                    },
                    {
                        "first": "Yih",
                        "middle": [],
                        "last": "Wen-Tau",
                        "suffix": ""
                    },
                    {
                        "first": "Geoffrey",
                        "middle": [],
                        "last": "Zweig",
                        "suffix": ""
                    }
                ],
                "year": 2013,
                "venue": "Proceedings of 2013 Conference of the North American Chapter of the ACL: Human Language Technologies",
                "volume": "",
                "issue": "",
                "pages": "746--751",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of 2013 Con- ference of the North American Chapter of the ACL: Human Language Technologies, pages 746-751. As- sociation for Computational Linguistics.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Scikit-learn: Machine learning in Python",
                "authors": [
                    {
                        "first": "F",
                        "middle": [],
                        "last": "Pedregosa",
                        "suffix": ""
                    },
                    {
                        "first": "G",
                        "middle": [],
                        "last": "Varoquaux",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Gramfort",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Michel",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Thirion",
                        "suffix": ""
                    },
                    {
                        "first": "O",
                        "middle": [],
                        "last": "Grisel",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Blondel",
                        "suffix": ""
                    },
                    {
                        "first": "P",
                        "middle": [],
                        "last": "Prettenhofer",
                        "suffix": ""
                    },
                    {
                        "first": "R",
                        "middle": [],
                        "last": "Weiss",
                        "suffix": ""
                    },
                    {
                        "first": "V",
                        "middle": [],
                        "last": "Dubourg",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Vanderplas",
                        "suffix": ""
                    },
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Passos",
                        "suffix": ""
                    },
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Cournapeau",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Brucher",
                        "suffix": ""
                    },
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Perrot",
                        "suffix": ""
                    },
                    {
                        "first": "E",
                        "middle": [],
                        "last": "Duchesnay",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Journal of Machine Learning Research",
                "volume": "12",
                "issue": "",
                "pages": "2825--2830",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duch- esnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Glove: Global vectors for word representation",
                "authors": [
                    {
                        "first": "Jeffrey",
                        "middle": [],
                        "last": "Pennington",
                        "suffix": ""
                    },
                    {
                        "first": "Richard",
                        "middle": [],
                        "last": "Socher",
                        "suffix": ""
                    },
                    {
                        "first": "Christopher",
                        "middle": [],
                        "last": "Manning",
                        "suffix": ""
                    }
                ],
                "year": 2014,
                "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
                "volume": "",
                "issue": "",
                "pages": "1532--1543",
                "other_ids": {
                    "DOI": [
                        "10.3115/v1/D14-1162"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.",
                "links": null
            },
            "BIBREF28": {
                "ref_id": "b28",
                "title": "A computational lexicon of Portuguese for automatic text parsing",
                "authors": [
                    {
                        "first": "Elisabete",
                        "middle": [],
                        "last": "Ranchhod",
                        "suffix": ""
                    },
                    {
                        "first": "Cristina",
                        "middle": [],
                        "last": "Mota",
                        "suffix": ""
                    },
                    {
                        "first": "Jorge",
                        "middle": [],
                        "last": "Baptista",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks",
                "volume": "",
                "issue": "",
                "pages": "45--50",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Elisabete Ranchhod, Cristina Mota, and Jorge Baptista. 1999. A computational lexicon of Portuguese for automatic text parsing. In Proceedings of SIGLEX99 Workshop: Standardizing Lexical Re- sources. Association for Computational Linguistics. Radim\u0158eh\u016f\u0159ek and Petr Sojka. 2010. Software Framework for Topic Modelling with Large Cor- pora. In Proceedings of the LREC 2010 Workshop on New Challenges for NLP Frameworks, pages 45- 50, Valletta, Malta. ELRA. http://is.muni.cz/ publication/884893/en.",
                "links": null
            },
            "BIBREF29": {
                "ref_id": "b29",
                "title": "CETEMP\u00fablico: Um corpus de grandes dimens\u00f5es de linguagem jornal\u00edstica portuguesa",
                "authors": [
                    {
                        "first": "Paulo",
                        "middle": [
                            "Alexandre"
                        ],
                        "last": "Rocha",
                        "suffix": ""
                    },
                    {
                        "first": "Diana",
                        "middle": [],
                        "last": "Santos",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "V Encontro para o processamento computacional da l\u00edngua portuguesa escrita e falada (PROPOR 2000)",
                "volume": "",
                "issue": "",
                "pages": "131--140",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Paulo Alexandre Rocha and Diana Santos. 2000. CETEMP\u00fablico: Um corpus de grandes dimens\u00f5es de linguagem jornal\u00edstica portuguesa. In V Encon- tro para o processamento computacional da l\u00edngua portuguesa escrita e falada (PROPOR 2000), pages 131-140, S\u00e3o Paulo. ICMC/USP.",
                "links": null
            },
            "BIBREF30": {
                "ref_id": "b30",
                "title": "Creative help: a story writing assistant",
                "authors": [
                    {
                        "first": "Melissa",
                        "middle": [],
                        "last": "Roemmele",
                        "suffix": ""
                    },
                    {
                        "first": "Andrew S",
                        "middle": [],
                        "last": "Gordon",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "International Conference on Interactive Digital Storytelling",
                "volume": "",
                "issue": "",
                "pages": "81--92",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Melissa Roemmele and Andrew S Gordon. 2015. Cre- ative help: a story writing assistant. In Interna- tional Conference on Interactive Digital Storytelling, pages 81-92. Springer.",
                "links": null
            },
            "BIBREF31": {
                "ref_id": "b31",
                "title": "Neural responding machine for short-text conversation",
                "authors": [
                    {
                        "first": "Lifeng",
                        "middle": [],
                        "last": "Shang",
                        "suffix": ""
                    },
                    {
                        "first": "Zhengdong",
                        "middle": [],
                        "last": "Lu",
                        "suffix": ""
                    },
                    {
                        "first": "Hang",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2015,
                "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
                "volume": "1",
                "issue": "",
                "pages": "1577--1586",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1577-1586.",
                "links": null
            },
            "BIBREF32": {
                "ref_id": "b32",
                "title": "Neural headline generation on abstract meaning representation",
                "authors": [
                    {
                        "first": "Sho",
                        "middle": [],
                        "last": "Takase",
                        "suffix": ""
                    },
                    {
                        "first": "Jun",
                        "middle": [],
                        "last": "Suzuki",
                        "suffix": ""
                    },
                    {
                        "first": "Naoaki",
                        "middle": [],
                        "last": "Okazaki",
                        "suffix": ""
                    },
                    {
                        "first": "Tsutomu",
                        "middle": [],
                        "last": "Hirao",
                        "suffix": ""
                    },
                    {
                        "first": "Masaaki",
                        "middle": [],
                        "last": "Nagata",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Proceedings of the 2016 conference on empirical methods in natural language processing",
                "volume": "",
                "issue": "",
                "pages": "1054--1059",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sho Takase, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. 2016. Neural headline generation on abstract meaning representation. In Proceedings of the 2016 conference on empirical methods in natural language processing, pages 1054-1059.",
                "links": null
            },
            "BIBREF33": {
                "ref_id": "b33",
                "title": "Computational generation and dissection of lexical replacement humor",
                "authors": [
                    {
                        "first": "Alessandro",
                        "middle": [],
                        "last": "Valitutti",
                        "suffix": ""
                    },
                    {
                        "first": "Antoine",
                        "middle": [],
                        "last": "Doucet",
                        "suffix": ""
                    },
                    {
                        "first": "Jukka",
                        "middle": [
                            "M"
                        ],
                        "last": "Toivanen",
                        "suffix": ""
                    },
                    {
                        "first": "Hannu",
                        "middle": [],
                        "last": "Toivonen",
                        "suffix": ""
                    }
                ],
                "year": 2016,
                "venue": "Natural Language Engineering",
                "volume": "22",
                "issue": "5",
                "pages": "727--749",
                "other_ids": {
                    "DOI": [
                        "10.1017/S1351324915000145"
                    ]
                },
                "num": null,
                "urls": [],
                "raw_text": "Alessandro Valitutti, Antoine Doucet, Jukka M. Toivanen, and Hannu Toivonen. 2016. Computational generation and dissection of lexical replacement humor. Natural Language Engineering, 22(5):727-749.",
                "links": null
            },
            "BIBREF34": {
                "ref_id": "b34",
                "title": "I read the news today, oh boy",
                "authors": [
                    {
                        "first": "Tony",
                        "middle": [],
                        "last": "Veale",
                        "suffix": ""
                    },
                    {
                        "first": "Hanyang",
                        "middle": [],
                        "last": "Chen",
                        "suffix": ""
                    },
                    {
                        "first": "Guofu",
                        "middle": [],
                        "last": "Li",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "International Conference on Distributed, Ambient, and Pervasive Interactions",
                "volume": "",
                "issue": "",
                "pages": "696--709",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Tony Veale, Hanyang Chen, and Guofu Li. 2017. I read the news today, oh boy. In International Conference on Distributed, Ambient, and Pervasive Interactions, pages 696-709. Springer.",
                "links": null
            },
            "BIBREF35": {
                "ref_id": "b35",
                "title": "Lyrisys: An interactive support system for writing lyrics based on topic transition",
                "authors": [
                    {
                        "first": "Kento",
                        "middle": [],
                        "last": "Watanabe",
                        "suffix": ""
                    },
                    {
                        "first": "Yuichiroh",
                        "middle": [],
                        "last": "Matsubayashi",
                        "suffix": ""
                    },
                    {
                        "first": "Kentaro",
                        "middle": [],
                        "last": "Inui",
                        "suffix": ""
                    },
                    {
                        "first": "Tomoyasu",
                        "middle": [],
                        "last": "Nakano",
                        "suffix": ""
                    },
                    {
                        "first": "Satoru",
                        "middle": [],
                        "last": "Fukayama",
                        "suffix": ""
                    },
                    {
                        "first": "Masataka",
                        "middle": [],
                        "last": "Goto",
                        "suffix": ""
                    }
                ],
                "year": 2017,
                "venue": "Proceedings of the 22nd international conference on intelligent user interfaces",
                "volume": "",
                "issue": "",
                "pages": "559--563",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kento Watanabe, Yuichiroh Matsubayashi, Kentaro Inui, Tomoyasu Nakano, Satoru Fukayama, and Masataka Goto. 2017. Lyrisys: An interactive support system for writing lyrics based on topic transition. In Proceedings of the 22nd international conference on intelligent user interfaces, pages 559-563.",
                "links": null
            }
        },
        "ref_entries": {
            "TABREF1": {
                "text": "Human evaluation of recommendation approaches.",
                "num": null,
                "content": "<table><tr><td>Headline</td><td>Proverb</td><td>Method</td><td>Rel</td><td>Fun</td></tr><tr><td>Mal\u00e1sia devolve 150 contentores ilegais de lixo a pa\u00edses subdesenvolvidos (Malaysia returns 150 illegal trash containers to underdeveloped countries)</td><td>Quem faz de si lixo, pisam-no as galinhas (Whoever makes themselves trash will be trampled by the chickens)</td><td>TFIDF</td><td>4</td><td>3.5</td></tr><tr><td>Tempestade 'Gl\u00f3ria' fez 12 mortos em Espanha. Governo culpa altera\u00e7\u00f5es clim\u00e1ticas ('Gloria' storm caused 12 deaths in Spain. Government blames climate change)</td><td>A culpa morre solteira (Guilt dies single)</td><td>TFIDF</td><td>4</td><td>3.25</td></tr><tr><td>Ainda n\u00e3o \u00e9 demasiado tarde para salvarmos os oceanos (It is not too late to save the oceans)</td><td>N\u00e3o deixe para amanh\u00e3 o que voc\u00ea pode fazer hoje (Do not leave for tomorrow what you can do today)</td><td>BERT</td><td>4</td><td>2.5</td></tr><tr><td>Veredicto abre a porta a protec\u00e7\u00e3o para 'refugiados clim\u00e1ticos' (Verdict opens door for protection to 'climate refugees')</td><td>Para tr\u00e1s mija a burra (The female donkey pisses backwards)</td><td>Jaccard</td><td>2.5</td><td>4</td></tr></table>",
                "html": null,
                "type_str": "table"
            },
            "TABREF2": {
                "text": "",
                "num": null,
                "content": "<table/>",
                "html": null,
                "type_str": "table"
            },
            "TABREF4": {
                "text": "Running examples of the application of each adaptation method.",
                "num": null,
                "content": "<table/>",
                "html": null,
                "type_str": "table"
            },
            "TABREF5": {
                "text": "",
                "num": null,
                "content": "<table><tr><td>Method</td><td>Relatedness (%) 1 2 3x</td><td>Novelty (%) 1 2 3x</td><td>Funniness (%) 1 2 3x</td></tr><tr><td colspan=\"4\">Final recommendation by TF-IDF</td></tr><tr><td>Substitution</td><td>25.8 38.0 36.2 2</td><td>17.6 40.8 41.6 2</td><td>45.2 29.4 25.4 2</td></tr><tr><td>Analogy</td><td>29.0 37.8 33.2 2</td><td>17.4 36.4 46.2 2</td><td>44.6 29.8 25.6 2</td></tr><tr><td>Vector Difference</td><td>34.6 35.6 29.8 2</td><td>17.0 35.4 47.6 2</td><td>53.4 27.0 19.6 1</td></tr><tr><td>Recommendation only</td><td>38.4 32.0 29.6 2</td><td>33.8 34.6 31.6 2</td><td>53.0 27.8 19.2 1</td></tr><tr><td colspan=\"4\">Final recommendation by BERT</td></tr><tr><td>Substitution</td><td>44.8 34.0 21.2 2</td><td>22.0 36.4 41.6 2</td><td>49.2 30.6 20.2 1</td></tr><tr><td>Analogy</td><td>38.8 36.6 24.6 2</td><td>20.0 33.8 46.2 2</td><td>52.0 28.2 19.8 1</td></tr><tr><td>Vector Difference</td><td>38.4 35.6 26.0 2</td><td>16.4 32.2 51.4 3</td><td>51.8 24.2 24.0 1</td></tr><tr><td>Recommendation only</td><td>52.5 27.8 19.8 1</td><td>22.0 35.0 43.0 2</td><td>46.0 29.8 24.2 2</td></tr></table>",
                "html": null,
                "type_str": "table"
            },
            "TABREF6": {
                "text": "Human evaluation of the adaptation and recommendation approaches.",
                "num": null,
                "content": "<table/>",
                "html": null,
                "type_str": "table"
            },
            "TABREF7": {
                "text": "",
                "num": null,
                "content": "<table><tr><td>Method</td><td>Input</td><td>Original Expression</td><td>Output</td><td>Scores</td></tr><tr><td>Subs + TF-IDF</td><td>Bancos dizem que as condi\u00e7\u00f5es das linhas de cr\u00e9dito foram definidas pelo governo (Banks claim that credit conditions were defined by the government)</td><td>Dar a C\u00e9sar o que \u00e9 de C\u00e9sar e a Deus o que \u00e9 de Deus (To Caesar what is Caesar's and to God what is God's)</td><td>Dar a governo o que \u00e9 de governo e a Deus o que \u00e9 de Deus (To the government what is the government's and to God what is God's)</td><td>Rel= 3 Nov= 2 Fun= 2.5</td></tr><tr><td>Analogy + BERT</td><td>Uma simples conversa gera got\u00edculas que podem ficar suspensas no ar at\u00e9 14 minutos (A simple conversation generates droplets that can be suspended in the air for up to 14 minutes)</td><td>Uma ovelha m\u00e1 p\u00f5e o rebanho a perder (A bad sheep makes the herd lose)</td><td>Uma got\u00edcula suspensa p\u00f5e o rebanho a perder (A suspended droplet makes the herd lose)</td><td>Rel= 2.25 Nov= 2.75 Fun= 2.5</td></tr><tr><td>VecDiff + BERT</td><td>Noivos desesperam por terminar lua de mel, mesmo estando \"presos\" nas Maldivas (Newlyweds are desperate to end their honeymoon, even though they are \"stuck\" in the Maldives)</td><td>Ouro \u00e9 o que ouro vale (Gold is what gold is worth)</td><td>Mel desespera o que mel vale (Honey despairs what honey is worth)</td><td>Rel= 2.25 Nov= 2.5 Fun= 2</td></tr></table>",
                "html": null,
                "type_str": "table"
            },
            "TABREF8": {
                "text": "Examples of high-scored expressions according to the human judges.",
                "num": null,
                "content": "<table/>",
                "html": null,
                "type_str": "table"
            }
        }
    }
}