{
    "paper_id": "I11-1038",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T07:31:43.937518Z"
    },
    "title": "Fine-Grained Sentiment Analysis with Structural Features",
    "authors": [
        {
            "first": "C\u00e4cilia",
            "middle": [],
            "last": "Zirn",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Mathias",
            "middle": [],
            "last": "Niepert",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Heiner",
            "middle": [],
            "last": "Stuckenschmidt",
            "suffix": "",
            "affiliation": {},
            "email": ""
        },
        {
            "first": "Michael",
            "middle": [],
            "last": "Strube",
            "suffix": "",
            "affiliation": {},
            "email": "michael.strube@h-its.org"
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "Sentiment analysis is the problem of determining the polarity of a text with respect to a particular topic. For most applications, however, it is not only necessary to derive the polarity of a text as a whole but also to extract negative and positive utterances on a more finegrained level. Sentiment analysis systems working on the (sub-)sentence level, however, are difficult to develop since shorter textual segments rarely carry enough information to determine their polarity out of context. In this paper, therefore, we present a fully automatic framework for fine-grained sentiment analysis on the subsentence level combining multiple sentiment lexicons and neighborhood as well as discourse relations to overcome this problem. We use Markov logic to integrate polarity scores from different sentiment lexicons with information about relations between neighboring segments, and evaluate the approach on product reviews. The experiments show that the use of structural features improves the accuracy of polarity predictions achieving accuracy scores of up to 69%.",
    "pdf_parse": {
        "paper_id": "I11-1038",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "Sentiment analysis is the problem of determining the polarity of a text with respect to a particular topic. For most applications, however, it is not only necessary to derive the polarity of a text as a whole but also to extract negative and positive utterances on a more finegrained level. Sentiment analysis systems working on the (sub-)sentence level, however, are difficult to develop since shorter textual segments rarely carry enough information to determine their polarity out of context. In this paper, therefore, we present a fully automatic framework for fine-grained sentiment analysis on the subsentence level combining multiple sentiment lexicons and neighborhood as well as discourse relations to overcome this problem. We use Markov logic to integrate polarity scores from different sentiment lexicons with information about relations between neighboring segments, and evaluate the approach on product reviews. The experiments show that the use of structural features improves the accuracy of polarity predictions achieving accuracy scores of up to 69%.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Sentiment analysis systems have continuously improved the quality of polarity classifications of entire product reviews. For numerous real-world applications, however, classification on such a coarse level is not suitable. Even in their most enthusiastic reviews, users still tend to mention negative aspects of a particular product. Conversely, in very negative reviews there might still be mentions of several positive aspects of the product. Moreover, different opinions can even be uttered in the same sentence. Consider, for instance, the sentence \"Despite the pretty design I would never recommend it, because the sound quality is unacceptable\" which expresses both positive and negative opinions about a product. Thus, to determine both negative and positive utterances in product reviews, classification on the subsentence level is needed.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Sentiment Analysis on Subsentence Level. As basic classification unit for our fine-grained sentiment analysis system we choose discourse segments. There are various theories describing discourse, discourse segmentation and discourse relations. The most well-known theory aiming to describe some aspects of text coherence is the Rhetorical Structure Theory (RST) introduced by Mann and Thompson (1988) . According to this theory, every text consists of elementary segments that are connected by relations. Segments joined by a relation form a unit, which is itself connected to other segments. This leads to a hierarchical tree structure that spans over the whole text. The example sentence given above could be divided into the three segments s 1 = Despite the pretty design, s 2 = I would never recommend it and s 3 = because the sound quality is unacceptable, with a CONCESSION relation 1 holding between s 1 and s 2 and a CAUSE-EXPLANATION-EVIDENCE relation holding between s 2 and s 3 . As the segments form logical units, and parts of sentences bearing different polarities are contrastive and thus do not constitute a logical unit, we claim that the discourse segment level is appropriate for fine-grained sentiment analysis.",
                "cite_spans": [
                    {
                        "start": 385,
                        "end": 400,
                        "text": "Thompson (1988)",
                        "ref_id": "BIBREF16"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Integrating Neighborhood Relations. As discourse segments consist of only a few tokens, they rarely carry enough information to determine their polarity out of context. While it occurs that neighboring segments bear opposite polarities, like in the example given above, two segments following each other are mostly of the same polarity. Therefore, when determining the polarity of a discourse segment, we consider the polarity of the neighboring segments for the classification.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Leveraging Contrast Relations. Although mentioning positive and negative opinions next to each other constitutes a contrast, we cannot conclude that every contrast indicates a polarity change. We conducted a simple corpus study, focusing on the cue word but which is a strong indicator for contrast relations. Of all consecutive discourse segments connected by the word but, 40% express opposite and 60% express the same opinion. Of all the other discourse segment pairs, only 10% express differing opinions. From this experimental observation, we conclude that two neighboring segments not related by a contrast relation have a much higher probability of bearing opinions of the same polarity than segments connected by a contrast relation. In our experiments, we will investigate whether the distinction between CONTRAST and NO CONTRAST relations will improve fine-grained sentiment analysis.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Collective classification. The challenge of finegrained sentiment analysis is that shorter text segments pose a more difficult classification problem. There are various approaches to determining the polarity of text. One common approach is the look-up of terms in a sentiment lexicon with polarity scores. As discussed in the previous paragraphs, we claim that incorporating information about a segment's neighbors, the classification of small text segments can be improved on. Therefore, we simultaneously determine the most probable classification of all segments in a review. We use Markov logic to combine polarity scores from different sentiment lexicons with information about discourse relations between neighboring segments, and evaluate the method on product reviews.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1"
            },
            {
                "text": "Methods for fine-grained sentiment analysis are developed by Hu and Liu (2004) , Ding et al. (2008) and Popescu and Etzioni (2005) . While the approaches of the former two operate on the sentence level, the system of the latter - Popescu and Etzioni (2005) -extracts opinion phrases on the subsentence level for product features. Their approaches have in common that they first extract features of a product, like the size of a camera or its weight. Then, they look for opinion words describing these features. Finally, the polarity of these terms and, thus, of the feature is determined. An even finer-grained system is presented in Kessler and Nicolov (2009) . The approach aims at classifying both sentiment expressions as well as their targets using a rich set of linguistic features. However, they have not implemented the component that detects and analyses sentiment expressions, but focus on target detection. T\u00e4ckstr\u00f6m and McDonald (2011) combine fully and partially supervised structured conditional models for a joint classification of the polarity of whole reviews and the review's sentences.",
                "cite_spans": [
                    {
                        "start": 61,
                        "end": 78,
                        "text": "Hu and Liu (2004)",
                        "ref_id": "BIBREF10"
                    },
                    {
                        "start": 81,
                        "end": 99,
                        "text": "Ding et al. (2008)",
                        "ref_id": "BIBREF3"
                    },
                    {
                        "start": 104,
                        "end": 130,
                        "text": "Popescu and Etzioni (2005)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 230,
                        "end": 256,
                        "text": "Popescu and Etzioni (2005)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 634,
                        "end": 660,
                        "text": "Kessler and Nicolov (2009)",
                        "ref_id": "BIBREF12"
                    },
                    {
                        "start": 918,
                        "end": 947,
                        "text": "T\u00e4ckstr\u00f6m and McDonald (2011)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "An approach based on assumptions similar to our intuition to integrate discourse relations is described in Kim and Hovy (2006) where the authors label sentences as reasons for or against purchasing a product. The system makes use of conjunctions like \"and\" to infer polarities and applies a specific rule to sentences including the word \"but\": if no polarity can be identified for the clause containing \"but\", the polarity of the previous phrase is taken and negated. In our system, we incorporate this information using discourse relations.",
                "cite_spans": [
                    {
                        "start": 107,
                        "end": 126,
                        "text": "Kim and Hovy (2006)",
                        "ref_id": "BIBREF13"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "The impact of discourse relations for sentiment analysis is investigated in Asher et al. (2009) . The authors conduct a manual study in which they represent opinions in text as shallow semantic feature structures. These are combined to an overall opinion using hand-written rules based on manually annotated discourse relations. An interdependent classification scenario to determine polarity as well as discourse relations is presented in Somasundaran and . In their approach, text is modeled as opinion graphs including discourse information. In the authors try alternative machine learning scenarios with combinations of supervised and unsupervised methods for the same task. However, they do not determine discourse relations automatically but use manual annotations.",
                "cite_spans": [
                    {
                        "start": 76,
                        "end": 95,
                        "text": "Asher et al. (2009)",
                        "ref_id": "BIBREF0"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Related Work",
                "sec_num": "2"
            },
            {
                "text": "The basic idea of our approach is the integration of heterogeneous features such as polarity scores from sentiment lexicons and neighborhood relations between segments. We use concepts and algorithms from statistical relational learning and, in particular, Markov logic networks (Richardson and Domingos, 2006) .",
                "cite_spans": [
                    {
                        "start": 279,
                        "end": 310,
                        "text": "(Richardson and Domingos, 2006)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical-Relational Representation",
                "sec_num": "3"
            },
            {
                "text": "We briefly introduce Markov logic as a framework for combining numerical and structural features and describe how the problem of fine-grained sentiment analysis based on multiple lexicons and discourse relations can be represented in the language. Most probable polarity classifications are then derived by computing maximum a-posteriori (MAP) states in the ground Markov logic network.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical-Relational Representation",
                "sec_num": "3"
            },
            {
                "text": "Markov logic (Richardson and Domingos, 2006) can be as a first-order template language for loglinear models with binary variables. Log-linear models are parameterizations of undirected graphical models (Markov networks) which play an important role in the areas of reasoning under uncertainty (Koller and Friedman, 2009) and statistical relational learning (Getoor and Taskar, 2007) . Please note that log-linear models are also known as maximum-entropy models in the NLP community (Manning and Sch\u00fctze, 1999) . The features of a log-linear model can be complex and allow the user to incorporate prior knowledge about what types of data are expected to be important for classification.",
                "cite_spans": [
                    {
                        "start": 13,
                        "end": 44,
                        "text": "(Richardson and Domingos, 2006)",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 293,
                        "end": 320,
                        "text": "(Koller and Friedman, 2009)",
                        "ref_id": "BIBREF14"
                    },
                    {
                        "start": 357,
                        "end": 382,
                        "text": "(Getoor and Taskar, 2007)",
                        "ref_id": "BIBREF8"
                    },
                    {
                        "start": 482,
                        "end": 509,
                        "text": "(Manning and Sch\u00fctze, 1999)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Networks",
                "sec_num": "3.1"
            },
            {
                "text": "A Markov network M is an undirected graph whose nodes represent a set of random variables X = {X 1 , ..., X n } and whose edges model direct probabilistic interactions between adjacent nodes. More formally, a distribution P is a log-linear model over a Markov network M if it is associated with:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Networks",
                "sec_num": "3.1"
            },
            {
                "text": "\u2022 a set of features {f 1 (D 1 ), ..., f k (D k )}, where each D i is a clique in M and each f i is a function from D i to R,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Networks",
                "sec_num": "3.1"
            },
            {
                "text": "\u2022 a set of real-valued weights w 1 , ..., w k , such that",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Networks",
                "sec_num": "3.1"
            },
            {
                "text": "P (X = x) = 1 Z exp k i=1 w i f i (D i ) ,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Networks",
                "sec_num": "3.1"
            },
            {
                "text": "where Z is a normalization constant.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Networks",
                "sec_num": "3.1"
            },
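To make the log-linear definition above concrete, here is a minimal Python sketch (not part of the paper) that evaluates P(X = x) = (1/Z) exp(sum_i w_i f_i(x)) for a toy two-variable model; the features, weights and variable names are invented for illustration.

import itertools
import math

# Toy log-linear model over two binary variables X1, X2 (illustrative only).
# Each feature maps a full assignment to a real number; weights are fixed here.
features = [
    lambda x: 1.0 if x["X1"] else 0.0,               # f1: X1 is true
    lambda x: 1.0 if x["X1"] == x["X2"] else 0.0,    # f2: X1 and X2 agree
]
weights = [0.5, 1.2]

def score(x):
    # Weighted feature sum: sum_i w_i * f_i(x)
    return sum(w * f(x) for w, f in zip(weights, features))

# Normalization constant Z sums exp(score) over all possible worlds.
worlds = [dict(zip(("X1", "X2"), vals)) for vals in itertools.product([False, True], repeat=2)]
Z = sum(math.exp(score(x)) for x in worlds)

def probability(x):
    # P(X = x) = (1/Z) exp(sum_i w_i f_i(x))
    return math.exp(score(x)) / Z

for x in worlds:
    print(x, round(probability(x), 3))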
            {
                "text": "A Markov logic network is a set of pairs (F i , w i ) where each F i is a first-order formula and each w i a real-valued weight associated with F i . With a finite set of constants C it defines a loglinear model over possible worlds {x} where each variable X j corresponds to a ground atom and feature f i is the number of true groundings (instantiations) of F i with respect to C in possible world x. Possible worlds are truth assignments to all ground atoms with respect to the set of constants C. We explicitly distinguish between weighted formulas and deterministic formulas, that is, formulas that always have to hold.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Networks",
                "sec_num": "3.1"
            },
            {
                "text": "There are two common types of inference tasks for a Markov logic network: Maximum a-posteriori inference and (conditional) probability inference. The latter computes the posterior probability distribution over a subset of the variables given an instantiation of a set of evidence variables. MAP inference, however, is concerned with finding a joint assignment to a subset of variables with maximal probability. Assume we are given a set X \u2286 X of instantiated variables and let Y = X \\ X . Then, a most probable state of the ground Markov logic network is given by",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": null
            },
            {
                "text": "argmax y k i=1 w i f i (D i ).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Inference",
                "sec_num": null
            },
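For intuition only, a brute-force MAP computation over a similarly tiny ground model: fix the evidence variables and take the argmax of the weighted feature sum over the hidden ones. Real Markov logic engines use much more efficient (approximate) solvers; all names and weights below are made up.

import itertools

# Toy ground model: two hidden binary variables Y1, Y2 and one observed variable E.
weights = {"y1_true": 0.3, "y1_eq_y2": 1.0, "e_implies_y2": 0.8}

def weighted_feature_sum(y, evidence):
    total = 0.0
    if y["Y1"]:
        total += weights["y1_true"]
    if y["Y1"] == y["Y2"]:
        total += weights["y1_eq_y2"]
    if evidence["E"] and y["Y2"]:
        total += weights["e_implies_y2"]
    return total

evidence = {"E": True}

# MAP state: argmax over joint assignments to the hidden variables.
best = max(
    (dict(zip(("Y1", "Y2"), vals)) for vals in itertools.product([False, True], repeat=2)),
    key=lambda y: weighted_feature_sum(y, evidence),
)
print("MAP assignment:", best)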
            {
                "text": "Given a set of first-order formulas and a set of ground atoms, we wish to find the formulas' maximum a posteriori (MAP) weights, that is, the weights that maximize the log-likelihood of the hidden variables given the evidence. There exist several learning algorithms for Markov logic such as voted perceptron, contrastive divergence, and scaled conjugate gradient (Lowd and Domingos, 2007) .",
                "cite_spans": [
                    {
                        "start": 364,
                        "end": 389,
                        "text": "(Lowd and Domingos, 2007)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Learning",
                "sec_num": null
            },
            {
                "text": "We employed the voted perceptron learner for the experiments (Richardson and Domingos, 2006; Lowd and Domingos, 2007; Riedel, 2008) which performs gradient descent steps to approximately optimize the conditional log-likelihood. In a MLN, the derivative of the conditional loglikelihood with respect to a weight w i is the difference between the number of true groundings f i of the formula F i in the training data and the expected number of groundings according to the model with weights w",
                "cite_spans": [
                    {
                        "start": 61,
                        "end": 92,
                        "text": "(Richardson and Domingos, 2006;",
                        "ref_id": "BIBREF20"
                    },
                    {
                        "start": 93,
                        "end": 117,
                        "text": "Lowd and Domingos, 2007;",
                        "ref_id": "BIBREF15"
                    },
                    {
                        "start": 118,
                        "end": 131,
                        "text": "Riedel, 2008)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Learning",
                "sec_num": null
            },
            {
                "text": "g i = \u2202 \u2202w i log P (Y = y|X = x ) = f i \u2212 E w [f i ].",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Learning",
                "sec_num": null
            },
            {
                "text": "The expected number of true groundings E w [f i ] is determined by (approximately) computing a MAP state with the current weights w. The perceptron update rule for the set of weights w for epoch t+1 is then",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Learning",
                "sec_num": null
            },
            {
                "text": "w t+1 = w t + \u03b7g,",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Learning",
                "sec_num": null
            },
            {
                "text": "where \u03b7 is the learning rate. Online learners repeat these steps updating the weight vector for a predetermined number of n epochs.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Parameter Learning",
                "sec_num": null
            },
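A schematic rendering of the perceptron update described above, assuming helper routines for counting true groundings and for MAP inference exist elsewhere; the stub routines at the end are placeholders, not the paper's implementation.

from typing import Callable, Dict

def perceptron_epochs(
    weights: Dict[str, float],
    gold: dict,
    evidence: dict,
    count_true_groundings: Callable[[str, dict, dict], float],
    map_state: Callable[[Dict[str, float], dict], dict],
    learning_rate: float = 0.1,
    epochs: int = 20,
) -> Dict[str, float]:
    """Sketch of w_{t+1} = w_t + eta * g with g_i = f_i(gold) - f_i(MAP state)."""
    for _ in range(epochs):
        # E_w[f_i] is approximated by the feature counts in the current MAP state.
        predicted = map_state(weights, evidence)
        for formula in weights:
            g = (count_true_groundings(formula, gold, evidence)
                 - count_true_groundings(formula, predicted, evidence))
            weights[formula] += learning_rate * g
    return weights

# Tiny demonstration with stub routines (real systems count groundings and run MAP inference).
stub_count = lambda formula, world, evidence: float(world.get(formula, False))
stub_map = lambda weights, evidence: {f: w > 0 for f, w in weights.items()}
print(perceptron_epochs({"f1": 0.0}, {"f1": True}, {}, stub_count, stub_map))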
            {
                "text": "Each discourse segment s is modeled with a constant symbol c \u2208 C. The set C, therefore, models the discourse segments in the text under consideration and comprises the set of constants of the Markov logic network. The segments s 1 , s 2 , and s 3 depicted in Figure 1 , for instance, would be modeled using the constant symbols c 1 , c 2 and c 3 . We represent the polarity of a segment using two non-observable predicates positive and negative.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 259,
                        "end": 267,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Markov Logic Formulation",
                "sec_num": "3.2"
            },
            {
                "text": "Note that the state of variables modeling nonobservable ground predicates is only known during weight learning. We first formulate the fact that a segment is positive or negative but cannot be positive and negative at the same time using the following deterministic formulas:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Formulation",
                "sec_num": "3.2"
            },
            {
                "text": "\u2200x : \u00acpositive(x) \u21d2 negative(x) \u2200x : negative(x) \u21d2 \u00acpositive(x)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Formulation",
                "sec_num": "3.2"
            },
            {
                "text": "Furthermore, the model incorporates several numerical a-priori features such as the polarity scores of individual segments provided by external lexical resources. We introduce these features in the experimental section in more detail. For each of these features we wish to include in the model, we first add the following deterministic equivalence formulas \u2200x : positive source (x) \u21d4 positive(x) \u2200x : negative source (x) \u21d4 negative(x) Now, in order to include a-priori polarity scores, we add the weighted formula positive source (x) and scale the contribution of a true ground atom positive source (s) with the a-priori polarity score of the particular segment s. This way, the parameter learning algorithm balances the contributions of the different sources according to their accuracy on the training data. The framework \"Markov theBeast\" 2 which we used for our experiments allows to add such real-valued features (Riedel, 2008) .",
                "cite_spans": [
                    {
                        "start": 918,
                        "end": 932,
                        "text": "(Riedel, 2008)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Formulation",
                "sec_num": "3.2"
            },
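One way to picture the scaled real-valued features, as a rough sketch: every lexicon contributes, per segment, a positive and a negative evidence value equal to its a-priori score times a learned per-source weight. The lexicon names, scores and weights below are invented for illustration.

# Per-lexicon a-priori scores for one segment (illustrative numbers).
segment_scores = {
    "swn_positive": 0.6, "swn_negative": 0.1,
    "tgl_positive": 0.3, "tgl_negative": 0.0,
}

# Learned per-source weights balance how much each lexicon is trusted.
learned_weights = {
    "swn_positive": 1.4, "swn_negative": 1.1,
    "tgl_positive": 0.7, "tgl_negative": 0.9,
}

def polarity_evidence(scores, weights):
    """Aggregate weighted lexicon evidence for the 'positive' and 'negative' atoms."""
    positive = sum(weights[k] * v for k, v in scores.items() if k.endswith("positive"))
    negative = sum(weights[k] * v for k, v in scores.items() if k.endswith("negative"))
    return positive, negative

print(polarity_evidence(segment_scores, learned_weights))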
            {
                "text": "The novel contribution of the present paper, however, is the addition of structural features, that is, features that model specific dependencies holding between the segments of a review. We distinguish two different types of such features, namely, neighborhood relations and discourse relations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Markov Logic Formulation",
                "sec_num": "3.2"
            },
            {
                "text": "The intuition behind neighborhood relations is that neighboring segments are more likely to have the same polarity. We model the fact that a segment precedes another segment with the observable predicate pre. Each sentence is represented as a set of ground predicates instantiated by constants modeling consecutive sentence segments. The sentence depicted in Figure 1 , for instance, would be represented with the two ground atoms pre(c 1 , c 2 ) and pre(c 2 , c 3 ). The following formulas are included in the Markov logic formulation to model the dependency of preceding segments.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 359,
                        "end": 367,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Neighborhood Relations",
                "sec_num": "3.2.1"
            },
            {
                "text": "\u2200x, y : pre(x, y) \u2227 positive(x) \u21d2 positive(y) \u2200x, y : pre(x, y) \u2227 negative(x) \u21d2 negative(y)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Neighborhood Relations",
                "sec_num": "3.2.1"
            },
            {
                "text": "The weights of the above formulas (a subset of the parameters of the model) are learned during training.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Neighborhood Relations",
                "sec_num": "3.2.1"
            },
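A small illustrative helper (not from the paper's code) for generating the pre(x, y) ground atoms from the ordered segments of a review, using the constants of the example sentence.

def neighborhood_atoms(segment_ids):
    """Return pre(x, y) ground atoms for consecutive segments of one review."""
    return [("pre", a, b) for a, b in zip(segment_ids, segment_ids[1:])]

# Example: the three segments of the sample sentence.
print(neighborhood_atoms(["c1", "c2", "c3"]))
# [('pre', 'c1', 'c2'), ('pre', 'c2', 'c3')]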
            {
                "text": "While there are numerous types of possible discourse relations we decided to only distinguish between contrast relations (contrast) and all other types of relations (nconstrast) due to their potential impact on polarity changes between discourse segments. In principle, however, it is possible to extend the model to also incorporate additional relations. The sentence shown in Figure 1 , for instance, would be represented with the two ground atoms contrast(c 1 , c 2 ) and ncontrast(c 2 , c 3 ).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 378,
                        "end": 386,
                        "text": "Figure 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Discourse Relations",
                "sec_num": "3.2.2"
            },
            {
                "text": "In order to leverage contrast relations, we included the following formulas in the Markov logic formulation, modeling how the absence of contrast relations between segments influences their potential polarity changes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse Relations",
                "sec_num": "3.2.2"
            },
            {
                "text": "\u2200x, y : contrast(x, y) \u2227 positive(x) \u21d2 negative(y) \u2200x, y : contrast(x, y) \u2227 negative(x) \u21d2 positive(y) \u2200x, y : ncontrast(x, y) \u2227 positive(x) \u21d2 positive(y) \u2200x, y : ncontrast(x, y) \u2227 negative(x) \u21d2 negative(y)",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse Relations",
                "sec_num": "3.2.2"
            },
            {
                "text": "Again, the weights of the above formulas are learned in the training phase.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse Relations",
                "sec_num": "3.2.2"
            },
            {
                "text": "The classification of a given set of segments is now equivalent to computing a maximum aposteriori (MAP) state of the respective ground Markov logic network.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse Relations",
                "sec_num": "3.2.2"
            },
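To illustrate the collective step, a tiny exhaustive MAP search that jointly labels all segments of one review by combining per-segment lexicon evidence with a single neighborhood-agreement weight; in the actual system this is solved by the MLN engine, and the scores and weight here are invented.

import itertools

def collective_map(prior, agree_weight=1.0):
    """prior: list of (positive_score, negative_score) per segment, in document order.
    Returns the jointly most probable list of labels ('pos'/'neg')."""
    n = len(prior)
    best, best_score = None, float("-inf")
    for labels in itertools.product(("pos", "neg"), repeat=n):
        score = 0.0
        for (p, q), lab in zip(prior, labels):
            score += p if lab == "pos" else q    # per-segment lexicon evidence
        for a, b in zip(labels, labels[1:]):
            if a == b:                            # neighboring segments tend to agree
                score += agree_weight
        if score > best_score:
            best, best_score = labels, score
    return list(best)

# Three segments: weak positive, unclear, strong negative (illustrative scores).
print(collective_map([(0.6, 0.2), (0.4, 0.4), (0.1, 0.9)]))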
            {
                "text": "In what follows, we describe the individual components of our sentiment analysis system and the data we used to experimentally evaluate it. For the evaluation, we first combine real-valued polarity scores derived from sentiment lexicons using Markov logic networks and classify all segments of a product review. We then investigate whether the addition of certain structural features improves the performance of the system.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experiments",
                "sec_num": "4"
            },
            {
                "text": "We chose a subset of the the Multi-Domain Sentiment Dataset arranged by Blitzer et al. (2007) and annotated it for our purpose. The Multi-Domain Sentiment Dataset consists of user-written product reviews downloaded from the web page http://amazon.com. The reviews are subdivided according to their topics. We included the three categories \"Cell Phones & Service\", \"Gourmet Food\" and \"Kitchen & Housewares\".",
                "cite_spans": [
                    {
                        "start": 72,
                        "end": 93,
                        "text": "Blitzer et al. (2007)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4.1"
            },
            {
                "text": "Each category consists of up to 100 reviews. A review is already classified as positive or negative according to the amount of stars the user has chosen for the product along with their review. To achieve a balanced corpus, we picked the 20 longest positive and the 20 longest negative reviews for each of the two topics, resulting in a complete amount of 120 reviews. Table 1 lists the three categories and their respective numbers of segments.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 369,
                        "end": 376,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Data",
                "sec_num": "4.1"
            },
            {
                "text": "Three independent annotators were instructed to label all passages of a review as positive, negative or neutral. Here, a passage is defined as a sequence of words sharing the same opinion. Each word of a review belongs to exactly one passage. The annotators were instructed to choose arbitrary passage boundaries independent of sentence or clause limits. The inter-annotator agreement among the three annotators varies from \u03ba = 0.40 to \u03ba = 0.45 for negative reviews, which is considered only fair agreement, and from \u03ba = 0.60 to \u03ba = 0.84 for positive reviews which is considered strong agreement according to Fleiss kappa (Fleiss, 1971) . In our experiments, we only use the two classes positive and negative. Because of the individual segmentations, we processed the corpus word by word to determine the final polarity labels. For each word, we considered the three polarity labels the annotators had chosen for the respective passages containing the word. If one of the labels positive or negative was used in the majority, we chose this as the final label. Whenever the majority of the annotators picked neutral or each of the annotators chose a different label the general polarity of the entire review as given by the data set was taken as final label. This is because we estimate the user chose the star-rating according to his overall opinion on the product he is reviewing. This general opinion is expressed by the review text and, therefore, the \"standard\" label for the review represents the overall opinion. The numbers of positive and negative segments according to the gold standard are shown in Table 1 .",
                "cite_spans": [
                    {
                        "start": 622,
                        "end": 636,
                        "text": "(Fleiss, 1971)",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 1609,
                        "end": 1616,
                        "text": "Table 1",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Gold Standard",
                "sec_num": "4.1.1"
            },
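A sketch of the described label aggregation, assuming the per-word annotator labels and the review's star-derived polarity are available as inputs; function and variable names are ours.

from collections import Counter

def word_gold_label(annotator_labels, review_polarity):
    """annotator_labels: the labels ('positive'/'negative'/'neutral') the three annotators
    assigned to the passage containing this word; review_polarity: overall star-based label."""
    counts = Counter(annotator_labels)
    top_label, top_count = counts.most_common(1)[0]
    if top_label in ("positive", "negative") and top_count > len(annotator_labels) / 2:
        return top_label
    return review_polarity  # neutral majority or three-way disagreement falls back to the review label

print(word_gold_label(["positive", "positive", "neutral"], "negative"))  # -> positive
print(word_gold_label(["positive", "negative", "neutral"], "negative"))  # -> negative (fallback)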
            {
                "text": "The final output of our fine-grained sentiment analysis system are discourse segments labeled as positive or negative. To compare them to the gold standard, we determine the polarity labels of all tokens belonging to the segment in the gold standard and take the most-chosen label. Again, if there is the same amount of positive and negative labels, we take the overall polarity of the whole review as label.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Gold Standard",
                "sec_num": "4.1.1"
            },
            {
                "text": "For each segment, we estimate prior positivity and negativity scores using state-of-the-art sentiment classification methods. There are two basic ways to classify polarity. One of the most common approaches is to train a classifier on labeled data that works with a bag-of-words model or uses similar features. However, named approach will have difficulties with the short text segments our system is focused on.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Polarity Features",
                "sec_num": "4.2"
            },
            {
                "text": "Another method for polarity classification is to look up terms in a pre-compiled sentiment lexicon that lists terms and their polarities. We chose the latter method for several reasons. First, lexiconbased methods do not rely on large amounts of training data. Second, lexicons can easily be exchanged or added which makes the approach more flexible. Third, the use of Markov logic allows us to combine several lexicons without additional effort. To compute the positivity and negativity score for a segment according to a lexicon, we first look up the positivity as well as the negativity of each term of the segment in this lexicon. Then, we average the positivity as well as the negativity scores. This leads to one positivity score and one negativity score per lexicon for each segment. We use a simple heuristic to consider negated polarity terms such as in not good. To this end, we manually compiled a list of negation terms 3 . Every time we detect such a negation indicator within a segment, we switch the positivity and the negativity scores of all terms occurring after said negation. We employ the following lexicons:",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Polarity Features",
                "sec_num": "4.2"
            },
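A simplified version of the scoring procedure just described: average a lexicon's per-term positivity and negativity scores over the segment and swap the two scores for terms that follow a negation cue. The miniature lexicon and negation list are placeholders for the real resources.

# Placeholder lexicon: term -> (positivity, negativity); the real lexicons are SWN, TGL and UL.
LEXICON = {"good": (0.8, 0.0), "bad": (0.0, 0.7), "sound": (0.1, 0.1)}
NEGATIONS = {"not", "never", "no"}  # hand-compiled negation cues (illustrative subset)

def segment_scores(tokens, lexicon=LEXICON):
    pos_scores, neg_scores = [], []
    negated = False
    for tok in tokens:
        if tok.lower() in NEGATIONS:
            negated = True          # flip the polarity of everything after the cue
            continue
        pos, neg = lexicon.get(tok.lower(), (0.0, 0.0))
        if negated:
            pos, neg = neg, pos
        pos_scores.append(pos)
        neg_scores.append(neg)
    n = max(len(pos_scores), 1)
    # One (positivity, negativity) pair per lexicon for this segment.
    return sum(pos_scores) / n, sum(neg_scores) / n

print(segment_scores("the sound is not good".split()))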
            {
                "text": "\u2022 SentiWordNet (SWN) SWN (Esuli and Sebastiani, 2006 ) is a lexical resource that contains positivity-scores, negativity-scores and objectivity-scores for WordNet (Fellbaum, 1998) synsets. The scores are between 0.0 and 1.0 and all three scores for a synset sum up to 1.0. For our system, we only regard positivity scores and negativity scores. We use a part-of-speech tagger and take the first word sense.",
                "cite_spans": [
                    {
                        "start": 25,
                        "end": 52,
                        "text": "(Esuli and Sebastiani, 2006",
                        "ref_id": "BIBREF5"
                    },
                    {
                        "start": 163,
                        "end": 179,
                        "text": "(Fellbaum, 1998)",
                        "ref_id": null
                    }
                ],
                "ref_spans": [
                    {
                        "start": 15,
                        "end": 20,
                        "text": "(SWN)",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Polarity Features",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 Taboada and Grieve's Turney Adjective List (TGL) Taboada and Grieve (2004) created a list containing adjectives and their polarities based on a method described by Turney (2002) . They first query a search engine for the adjective together with some manually chosen clearly positive adjectives, using the nearoperator, then they do the same with a list of negative adjectives. Finally, they calculate the point-wise mutual information (Church and Hanks, 1990) between the queries.",
                "cite_spans": [
                    {
                        "start": 166,
                        "end": 179,
                        "text": "Turney (2002)",
                        "ref_id": "BIBREF27"
                    },
                    {
                        "start": 437,
                        "end": 461,
                        "text": "(Church and Hanks, 1990)",
                        "ref_id": "BIBREF2"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Polarity Features",
                "sec_num": "4.2"
            },
            {
                "text": "\u2022 Unigram Lexicon (UL) There are terms whose polarity depends on the context they are used in. Consider for instance the word large: a large screen is good while a large cell phone is likely bad.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Polarity Features",
                "sec_num": "4.2"
            },
            {
                "text": "To take domain-dependence into account, we compile a list of common positive and negative unigrams as well as punctuation marks for each of the three topics separately. Since we need 40 reviews per topic for the evaluation only the remaining reviews are used to compile the unigram lexicon. From this data, we calculate the ratio of all occurrences of a unigram in positive reviews to its occurrences in negative reviews and use this ratio as the positivity and negativity scores, respectively.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Polarity Features",
                "sec_num": "4.2"
            },
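A rough sketch of building the domain-specific unigram lexicon from the held-out reviews; the add-one smoothing and the exact normalization of the ratio into positivity and negativity scores are our assumptions, since the paper does not spell them out.

from collections import Counter

def build_unigram_lexicon(positive_reviews, negative_reviews, smoothing=1.0):
    """Each argument is a list of token lists; returns token -> (pos_score, neg_score)."""
    pos_counts = Counter(tok for review in positive_reviews for tok in review)
    neg_counts = Counter(tok for review in negative_reviews for tok in review)
    lexicon = {}
    for tok in set(pos_counts) | set(neg_counts):
        p = pos_counts[tok] + smoothing
        n = neg_counts[tok] + smoothing
        lexicon[tok] = (p / (p + n), n / (p + n))   # ratio-based scores (assumed normalization)
    return lexicon

lex = build_unigram_lexicon([["great", "battery", "!"]], [["battery", "died", "!"]])
print(lex["battery"], lex["great"])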
            {
                "text": "We employ the discourse parser HILDA developed by duVerle and Prendinger (2009). It performs two tasks: First, it splits the review text into discourse segments which constitute the basic entities our system classifies. Second, it determines the discourse relations between segments. The actual output of HILDA is the discourse tree of a text. We convert the tree structure to a linear sequence of relations between neighboring segments. HILDA uses the set of relation labels described by Soricut and Marcu (2003) which is coarser-grained than RST and consists of 18 labels. For the experiments, we distinguished two types of relations: relations labeled as contrast and all other relations. We refer to this class as ncontrast. We model these two relations in the Markov logic framework as described in Section 3.2. We want to investigate whether the use of a discourse parser is improving fine-grained sentiment analysis. Table 2 : Results (%) for the different systems. P = precision, R = recall, F = F-measure, A = accuracy the discourse parser constitute the basic units for our sentiment classification system. Evaluating the correctness of discourse parsing is a hard task. However, it is not of prime importance for our task that the segments are correct according to any discourse theory but that they do not include passages containing differing labels according to the gold standard. An analysis of the data shows that only 3.2% of the segments contain contradictory labels. We therefore concluded that it is appropriate to use the discourse segments as basic units for the evaluation of our system.",
                "cite_spans": [
                    {
                        "start": 489,
                        "end": 513,
                        "text": "Soricut and Marcu (2003)",
                        "ref_id": "BIBREF24"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 924,
                        "end": 931,
                        "text": "Table 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Discourse Parsing",
                "sec_num": "4.2.1"
            },
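            {
                "text": "The paper does not detail the tree-to-sequence conversion, so the following Python sketch shows one plausible realization (our assumption, not HILDA's API): the relation label of each internal node is assigned to the boundary between the last segment of its left subtree and the first segment of its right subtree, and, following the footnote, CONCESSION is collapsed into contrast while every other label becomes ncontrast.\n\nclass Node:\n    # A leaf carries a segment index; an internal node carries a relation label.\n    def __init__(self, label=None, left=None, right=None, segment=None):\n        self.label, self.left, self.right, self.segment = label, left, right, segment\n\ndef leaves(node):\n    if node.segment is not None:\n        return [node.segment]\n    return leaves(node.left) + leaves(node.right)\n\ndef neighbor_relations(node, out=None):\n    # Returns (left_segment, right_segment, relation) triples for adjacent segments.\n    if out is None:\n        out = []\n    if node.segment is not None:\n        return out\n    rel = 'contrast' if node.label.lower() in ('contrast', 'concession') else 'ncontrast'\n    out.append((leaves(node.left)[-1], leaves(node.right)[0], rel))\n    neighbor_relations(node.left, out)\n    neighbor_relations(node.right, out)\n    return out",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Discourse Parsing",
                "sec_num": "4.2.1"
            },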
            {
                "text": "The goal of our system is to label discourse segments of a review as positive or negative. We employed the Markov logic network implementation \"Markov theBeast\" (Riedel, 2008) . We compare three different Markov logic networks. First, we only take into account the realvalued polarity features. We consider this ML formulation (\"MLN polarity\") a baseline to evaluate the quality of the evidence collected from the sentiment lexicons. To compare the performance of this system to the state of the art in classification algorithms, we train a Support Vector Machine (SVM) (Platt, 1998; Keerthi et al., 2001; Hastie and Tibshirani, 1998) on the polarity features. In a second Markov logic formulation (MLN neighborhood), we incorporate structural information about neighboring segments using the formulas described in section 3.2.1. In order to assess the impact of explicitly distinguishing between contrast and ncontrast relations, we use the Markov logic formulation described in Section 3.2.2 (MLN contrast).",
                "cite_spans": [
                    {
                        "start": 161,
                        "end": 175,
                        "text": "(Riedel, 2008)",
                        "ref_id": "BIBREF21"
                    },
                    {
                        "start": 570,
                        "end": 583,
                        "text": "(Platt, 1998;",
                        "ref_id": "BIBREF18"
                    },
                    {
                        "start": 584,
                        "end": 605,
                        "text": "Keerthi et al., 2001;",
                        "ref_id": "BIBREF11"
                    },
                    {
                        "start": 606,
                        "end": 634,
                        "text": "Hastie and Tibshirani, 1998)",
                        "ref_id": "BIBREF9"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setting",
                "sec_num": "4.3"
            },
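            {
                "text": "The concrete formulas are those defined in Section 3.2. Purely as an illustration of the three configurations, and not as the formulas actually used, weighted formulas of roughly the following shape would encode (i) per-segment polarity evidence, (ii) polarity agreement between neighboring segments, and (iii) polarity inversion across contrast relations:\n\nw_{\\mathrm{pol}} \\cdot \\mathrm{score}(s) : \\; \\mathrm{positive}(s)\nw_{\\mathrm{nb}} : \\; \\mathrm{neighbor}(s_i, s_{i+1}) \\wedge \\mathrm{positive}(s_i) \\Rightarrow \\mathrm{positive}(s_{i+1})\nw_{\\mathrm{con}} : \\; \\mathrm{contrast}(s_i, s_{i+1}) \\wedge \\mathrm{positive}(s_i) \\Rightarrow \\mathrm{negative}(s_{i+1})\n\nwith analogous formulas for the negative class; MLN polarity uses only the first kind, MLN neighborhood adds the second, and MLN contrast additionally distinguishes contrast from ncontrast boundaries.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setting",
                "sec_num": "4.3"
            },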
            {
                "text": "We learn the weight parameters of the Markov logic networks by running the voted perceptron online learner for 20 epochs (Riedel, 2008) . We then evaluate each of the classification algorithms with 10-fold cross validation. Table 2 lists the evaluation results for the different classifiers. To determine statistical significance of the relative effectiveness of two classifiers we applied a paired t-test at a significance level of p < 0.01. The classifiers exclusively using polarity features have comparable accuracy values. While the SVM is showing a bias towards classifying segments as negative ML polarity shows the opposite trend. Although the accuracy of SVM is slightly higher the relative difference of the accuracy values is not statistically significant. Including neighborhood relations increases the effectiveness relative to both non-structure based classifiers significantly. MLN neigborhood achieves an F-measure of 69.50% for positive segments and 68.52% for negative segments with an overall accuracy of 69.02%. It also significantly outperforms the majority baseline which achieves an accuracy of 51.60%. Contrary to our hypothesis, distinguishing between contrast and ncontrast relations did not improve the effectiveness relative to MLN neigborhood. MLN contrast achieves a slightly lower accuracy than MLN neighborhood although the difference is not statistically significant. These results suggest that the correlation of contrast relations and polarity changes is not significant. Furthermore, the number of contrast relations in product reviews is too small to have a significant impact. Finally, employing a discourse parser as a component of a sentiment analysis poses the problem that misclassifications might as well be caused by erroneous decisions of the component. Figure 2 depicts the accuracy values for the different classifiers on each of the ten cross-validation folds.",
                "cite_spans": [
                    {
                        "start": 121,
                        "end": 135,
                        "text": "(Riedel, 2008)",
                        "ref_id": "BIBREF21"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 224,
                        "end": 231,
                        "text": "Table 2",
                        "ref_id": null
                    },
                    {
                        "start": 1799,
                        "end": 1807,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Experimental Setting",
                "sec_num": "4.3"
            },
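            {
                "text": "As an aside, the paired t-test over per-fold accuracies can be reproduced with standard tooling; the Python sketch below uses scipy and hypothetical fold accuracies and is not the evaluation code used in this work.\n\nfrom scipy.stats import ttest_rel\n\ndef significantly_different(folds_a, folds_b, alpha=0.01):\n    # folds_a, folds_b: accuracies of two classifiers on the same cross-validation folds\n    t_statistic, p_value = ttest_rel(folds_a, folds_b)\n    return p_value < alpha\n\n# Hypothetical per-fold accuracies for two systems:\n# significantly_different([0.68, 0.70, 0.66, 0.72, 0.69, 0.71, 0.67, 0.70, 0.68, 0.69],\n#                         [0.55, 0.58, 0.54, 0.57, 0.56, 0.55, 0.58, 0.56, 0.54, 0.57])",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental Setting",
                "sec_num": "4.3"
            },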
            {
                "text": "To the best of our knowledge, there is no sen- Figure 2 : Accuracy values of the various algorithms for the 10 different cross-validation folds.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 47,
                        "end": 55,
                        "text": "Figure 2",
                        "ref_id": null
                    }
                ],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.4"
            },
            {
                "text": "timent analysis system operating on the discourse segment level to which we could compare our results. However, the task is similar to that approached by Kim and Hovy (2006) whose system achieves an accuracy of 57% classifying whole sentences of reviews as positive or negative. In T\u00e4ckstr\u00f6m and McDonald (2011) , the authors present a semi-supervised approach classifying sentences as positive, negative or neutral. Their approach achieves an accuracy of up to 59.1%. Considering the fact that our system is working on subsentence level we find our results promising.",
                "cite_spans": [
                    {
                        "start": 154,
                        "end": 173,
                        "text": "Kim and Hovy (2006)",
                        "ref_id": "BIBREF13"
                    },
                    {
                        "start": 282,
                        "end": 311,
                        "text": "T\u00e4ckstr\u00f6m and McDonald (2011)",
                        "ref_id": "BIBREF26"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Results",
                "sec_num": "4.4"
            },
            {
                "text": "In this work, we addressed the problem of finegrained sentiment analysis on subsentence level, achieving an accuracy of 69%. We proposed a sentiment classification method that uses Markov logic as a means to integrate polarity information from different sources and to explicitly use information about the structure of text to determine the polarity of text segments. The approach has a number of advantages. It is flexible enough to incorporate polarity scores from various sources. We used two pre-existing sentiment lexicons. To capture domain-dependent knowledge, we compiled an individual lexicon for each domain from training data. The presented approach, however, is not restricted to these sources and can include any source of polarity features. It allows for an easy combination of various existing methods into a single polarity judgement. Moreover, its major advantage is the inclusion of structural information. Again, this ability is more or less independent from a concrete method. In our work we used an existing discourse parser, however, other meth-ods for determining the discourse structure could be used as well. Finally, the Markov logic representation can be used in a supervised and in an unsupervised setting. The experiments described in the paper are based on the supervised setting: we used a manually annotated corpus to learn weights for the formulas in the Markov logic model. In cases, where no annotated corpus is available, we could still set the weights by hand and experiment with different settings until a good setting is found.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "5"
            },
            {
                "text": "Concerning fine-grained sentiment analysis the main result of our work is that the use of general structures found in the text systematically improves the results. As described in the paper, it turned out, however that the relation between the contrast relation and the change of polarity is not as close as we had expected. This means that the classical discourse relations are not necessarily the best choice concerning text structures to be taken into account. However, we think that focusing on cue words for discourse connectives is worth being investigated to determine features that allow us to more accurately predict such polarity changes. Further, in the work reported here, we only considered positive and negative polarity. This raises some questions concerning the treatment of segments that do not have a clear polarity. In future work, we will therefore extend our experiments to the case where segments can be classified as positive, negative or neutral.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusion and Future Work",
                "sec_num": "5"
            },
            {
                "text": "Please note that in this work we do not distinguish between CONCESSION and CONTRAST relations and consider both as CONTRAST relations. In the following, we will refer to all other kind of relations as NO CONTRAST relations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "The code can be downloaded at http://code. google.com/p/thebeast/",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            },
            {
                "text": "We used the negation indicators no, cannot, not, none, nothing, nowhere, neither, nor, nobody, hardly, scarcely, barely and all negations of auxiliaries modals ending on n't, like don't or won't.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }
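            {
                "text": "A minimal Python sketch of this negation check; it is our illustration, and the handling of n't forms via a regular expression is an assumption, since the paper only lists the indicators.\n\nimport re\n\nNEGATION_WORDS = {'no', 'cannot', 'not', 'none', 'nothing', 'nowhere', 'neither',\n                  'nor', 'nobody', 'hardly', 'scarcely', 'barely'}\nNT_PATTERN = re.compile(r\"\\w+n't$\", re.IGNORECASE)  # don't, won't, isn't, ...\n\ndef is_negation(token):\n    token = token.lower()\n    return token in NEGATION_WORDS or bool(NT_PATTERN.match(token))",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "",
                "sec_num": null
            }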
        ],
        "back_matter": [
            {
                "text": "We would like to thank Anette Frank for useful comments in the early phase of this work, and the annotators for annotating the product reviews.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgments",
                "sec_num": null
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Appraisal of opinion expressions in discourse",
                "authors": [
                    {
                        "first": "Nicholas",
                        "middle": [],
                        "last": "Asher",
                        "suffix": ""
                    },
                    {
                        "first": "Farah",
                        "middle": [],
                        "last": "Benamara",
                        "suffix": ""
                    },
                    {
                        "first": "Yannick",
                        "middle": [],
                        "last": "Mathieu",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Lingvisticae Investigations",
                "volume": "31",
                "issue": "",
                "pages": "279--292",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Nicholas Asher, Farah Benamara, and Yannick Math- ieu. 2009. Appraisal of opinion expressions in dis- course. Lingvisticae Investigations, 31(2):279-292.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification",
                "authors": [
                    {
                        "first": "John",
                        "middle": [],
                        "last": "Blitzer",
                        "suffix": ""
                    },
                    {
                        "first": "Mark",
                        "middle": [],
                        "last": "Dredze",
                        "suffix": ""
                    },
                    {
                        "first": "Fernando",
                        "middle": [],
                        "last": "Pereira",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proc. of ACL-07",
                "volume": "",
                "issue": "",
                "pages": "440--447",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, boom-boxes and blenders: Domain adaptation for sentiment classi- fication. In Proc. of ACL-07, pages 440-447.",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "Word association norms, mutual information, and lexicography",
                "authors": [
                    {
                        "first": "Kenneth",
                        "middle": [
                            "Ward"
                        ],
                        "last": "Church",
                        "suffix": ""
                    },
                    {
                        "first": "Patrick",
                        "middle": [],
                        "last": "Hanks",
                        "suffix": ""
                    }
                ],
                "year": 1990,
                "venue": "Computational Linguistics",
                "volume": "16",
                "issue": "1",
                "pages": "22--29",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Kenneth Ward Church and Patrick Hanks. 1990. Word association norms, mutual information, and lexicog- raphy. Computational Linguistics, 16(1):22-29.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "A holistic lexicon-based approach to opinion mining",
                "authors": [
                    {
                        "first": "Xiaowen",
                        "middle": [],
                        "last": "Ding",
                        "suffix": ""
                    },
                    {
                        "first": "Bing",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    },
                    {
                        "first": "Philip",
                        "middle": [
                            "S"
                        ],
                        "last": "Yu",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proc. of WSDM-08",
                "volume": "",
                "issue": "",
                "pages": "231--240",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Xiaowen Ding, Bing Liu, and Philip S. Yu. 2008. A holistic lexicon-based approach to opinion mining. In Proc. of WSDM-08, pages 231-240.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "A novel discourse parser based on support vector classification",
                "authors": [
                    {
                        "first": "David",
                        "middle": [],
                        "last": "Duverle",
                        "suffix": ""
                    },
                    {
                        "first": "Helmut",
                        "middle": [],
                        "last": "Prendinger",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proc. of ACL-IJCNLP-09",
                "volume": "",
                "issue": "",
                "pages": "665--673",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "David duVerle and Helmut Prendinger. 2009. A novel discourse parser based on support vector classifica- tion. In Proc. of ACL-IJCNLP-09, pages 665-673.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Senti-WordNet: A publicly available lexical resource for opinion mining",
                "authors": [
                    {
                        "first": "Andrea",
                        "middle": [],
                        "last": "Esuli",
                        "suffix": ""
                    },
                    {
                        "first": "Fabrizio",
                        "middle": [],
                        "last": "Sebastiani",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proc. of LREC '06",
                "volume": "",
                "issue": "",
                "pages": "417--422",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Andrea Esuli and Fabrizio Sebastiani. 2006. Senti- WordNet: A publicly available lexical resource for opinion mining. In Proc. of LREC '06, pages 417- 422.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "WordNet: An Electronic Lexical Database",
                "authors": [],
                "year": 1998,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christiane Fellbaum, editor. 1998. WordNet: An Elec- tronic Lexical Database. MIT Press, Cambridge, Mass.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "Measuring nominal scale agreement among many raters",
                "authors": [
                    {
                        "first": "Joseph",
                        "middle": [
                            "L"
                        ],
                        "last": "Fleiss",
                        "suffix": ""
                    }
                ],
                "year": 1971,
                "venue": "Psychological Bulletin",
                "volume": "76",
                "issue": "5",
                "pages": "378--382",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Joseph L. Fleiss. 1971. Measuring nominal scale agreement among many raters. Psychological Bul- letin, 76(5):378-382.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "Introduction to Statistical Relational Learning",
                "authors": [
                    {
                        "first": "Lise",
                        "middle": [],
                        "last": "Getoor",
                        "suffix": ""
                    },
                    {
                        "first": "Ben",
                        "middle": [],
                        "last": "Taskar",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Lise Getoor and Ben Taskar. 2007. Introduction to Statistical Relational Learning. MIT Press.",
                "links": null
            },
            "BIBREF9": {
                "ref_id": "b9",
                "title": "Classification by pairwise coupling",
                "authors": [
                    {
                        "first": "Trevor",
                        "middle": [],
                        "last": "Hastie",
                        "suffix": ""
                    },
                    {
                        "first": "Robert",
                        "middle": [],
                        "last": "Tibshirani",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Advances in Neural Information Processing Systems",
                "volume": "10",
                "issue": "",
                "pages": "451--471",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Trevor Hastie and Robert Tibshirani. 1998. Classifi- cation by pairwise coupling. In Michael I. Jordan, Michael J. Kearns, and Sara A. Solla, editors, Ad- vances in Neural Information Processing Systems, volume 10, pages 451-471. MIT Press.",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "Mining and summarizing customer reviews",
                "authors": [
                    {
                        "first": "Minqing",
                        "middle": [],
                        "last": "Hu",
                        "suffix": ""
                    },
                    {
                        "first": "Bing",
                        "middle": [],
                        "last": "Liu",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proc. of ACM SIGKDD '04",
                "volume": "",
                "issue": "",
                "pages": "168--177",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Minqing Hu and Bing Liu. 2004. Mining and summa- rizing customer reviews. In Proc. of ACM SIGKDD '04, pages 168-177.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Improvements to Platt's SMO algorithm for SVM classifier design",
                "authors": [
                    {
                        "first": "S",
                        "middle": [
                            "S"
                        ],
                        "last": "Keerthi",
                        "suffix": ""
                    },
                    {
                        "first": "S",
                        "middle": [
                            "K"
                        ],
                        "last": "Shevade",
                        "suffix": ""
                    },
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Bhattacharyya",
                        "suffix": ""
                    },
                    {
                        "first": "K",
                        "middle": [
                            "R K"
                        ],
                        "last": "Murthy",
                        "suffix": ""
                    }
                ],
                "year": 2001,
                "venue": "Neural Computation",
                "volume": "13",
                "issue": "3",
                "pages": "637--649",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S.S. Keerthi, S.K. Shevade, C. Bhattacharyya, and K.R.K. Murthy. 2001. Improvements to Platt's SMO algorithm for SVM classifier design. Neural Computation, 13(3):637-649.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "Targeting sentiment expressions through supervised ranking of linguistic configurations",
                "authors": [
                    {
                        "first": "Jason",
                        "middle": [
                            "S"
                        ],
                        "last": "Kessler",
                        "suffix": ""
                    },
                    {
                        "first": "Nicolas",
                        "middle": [],
                        "last": "Nicolov",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proc. of ICWSM-09",
                "volume": "",
                "issue": "",
                "pages": "90--97",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Jason S. Kessler and Nicolas Nicolov. 2009. Target- ing sentiment expressions through supervised rank- ing of linguistic configurations. In Proc. of ICWSM- 09, pages 90-97.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Automatic identification of pro and con reasons in online reviews",
                "authors": [
                    {
                        "first": "Min",
                        "middle": [],
                        "last": "Soo",
                        "suffix": ""
                    },
                    {
                        "first": "Eduard",
                        "middle": [],
                        "last": "Kim",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Hovy",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Proc. of COLING-ACL-06 Poster Session",
                "volume": "",
                "issue": "",
                "pages": "483--490",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Soo-Min Kim and Eduard Hovy. 2006. Automatic identification of pro and con reasons in online re- views. In Proc. of COLING-ACL-06 Poster Session, pages 483-490.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "Probabilistic Graphical Models: Principles and Techniques",
                "authors": [
                    {
                        "first": "Daphne",
                        "middle": [],
                        "last": "Koller",
                        "suffix": ""
                    },
                    {
                        "first": "Nir",
                        "middle": [],
                        "last": "Friedman",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daphne Koller and Nir Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. MIT Press.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "Efficient weight learning for Markov logic networks",
                "authors": [
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Lowd",
                        "suffix": ""
                    },
                    {
                        "first": "Pedro",
                        "middle": [],
                        "last": "Domingos",
                        "suffix": ""
                    }
                ],
                "year": 2007,
                "venue": "Proc. of ECML/PKDD-07",
                "volume": "",
                "issue": "",
                "pages": "200--211",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Daniel Lowd and Pedro Domingos. 2007. Efficient weight learning for Markov logic networks. In Proc. of ECML/PKDD-07, pages 200-211.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Rhetorical structure theory. Toward a functional theory of text organization",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "William",
                        "suffix": ""
                    },
                    {
                        "first": "Sandra",
                        "middle": [
                            "A"
                        ],
                        "last": "Mann",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Thompson",
                        "suffix": ""
                    }
                ],
                "year": 1988,
                "venue": "Text",
                "volume": "8",
                "issue": "3",
                "pages": "243--281",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "William C. Mann and Sandra A. Thompson. 1988. Rhetorical structure theory. Toward a functional the- ory of text organization. Text, 8(3):243-281.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Foundations of statistical natural language processing",
                "authors": [
                    {
                        "first": "D",
                        "middle": [],
                        "last": "Christopher",
                        "suffix": ""
                    },
                    {
                        "first": "Hinrich",
                        "middle": [],
                        "last": "Manning",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Sch\u00fctze",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Christopher D. Manning and Hinrich Sch\u00fctze. 1999. Foundations of statistical natural language process- ing. MIT Press.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "Fast training of support vector machines using sequential minimal optimization",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Platt",
                        "suffix": ""
                    }
                ],
                "year": 1998,
                "venue": "Advances in Kernel Methods -Support Vector Learning",
                "volume": "",
                "issue": "",
                "pages": "185--208",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Platt. 1998. Fast training of support vector ma- chines using sequential minimal optimization. In B. Schoelkopf, C. Burges, and A. Smola, edi- tors, Advances in Kernel Methods -Support Vector Learning, pages 185-208. MIT Press.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "Extracting product features and opinions from reviews",
                "authors": [
                    {
                        "first": "Ana-Maria",
                        "middle": [],
                        "last": "Popescu",
                        "suffix": ""
                    },
                    {
                        "first": "Oren",
                        "middle": [],
                        "last": "Etzioni",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Proc. HLT-EMNLP '05",
                "volume": "",
                "issue": "",
                "pages": "339--346",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Ana-Maria Popescu and Oren Etzioni. 2005. Extract- ing product features and opinions from reviews. In Proc. HLT-EMNLP '05, pages 339-346.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "Markov logic networks",
                "authors": [
                    {
                        "first": "Matthew",
                        "middle": [],
                        "last": "Richardson",
                        "suffix": ""
                    },
                    {
                        "first": "Pedro",
                        "middle": [],
                        "last": "Domingos",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "Machine Learning",
                "volume": "62",
                "issue": "",
                "pages": "107--136",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine Learning, 62(1- 2):107-136.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "Improving the accuracy and efficiency of MAP inference for Markov logic",
                "authors": [
                    {
                        "first": "Sebastian",
                        "middle": [],
                        "last": "Riedel",
                        "suffix": ""
                    }
                ],
                "year": 2008,
                "venue": "Proc. of UAI-08",
                "volume": "",
                "issue": "",
                "pages": "468--475",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Sebastian Riedel. 2008. Improving the accuracy and efficiency of MAP inference for Markov logic. In Proc. of UAI-08, pages 468-475.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Recognizing stances in online debates",
                "authors": [
                    {
                        "first": "Swapna",
                        "middle": [],
                        "last": "Somasundaran",
                        "suffix": ""
                    },
                    {
                        "first": "Janyce",
                        "middle": [],
                        "last": "Wiebe",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proc. of ACL-IJCNLP-09",
                "volume": "",
                "issue": "",
                "pages": "226--234",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Swapna Somasundaran and Janyce Wiebe. 2009. Rec- ognizing stances in online debates. In Proc. of ACL- IJCNLP-09, pages 226-234.",
                "links": null
            },
            "BIBREF23": {
                "ref_id": "b23",
                "title": "Supervised and unsupervised methods in employing discourse relations for improving opinion polarity classification",
                "authors": [
                    {
                        "first": "Swapna",
                        "middle": [],
                        "last": "Somasundaran",
                        "suffix": ""
                    },
                    {
                        "first": "Galileo",
                        "middle": [],
                        "last": "Namata",
                        "suffix": ""
                    },
                    {
                        "first": "Janyce",
                        "middle": [],
                        "last": "Wiebe",
                        "suffix": ""
                    },
                    {
                        "first": "Lise",
                        "middle": [],
                        "last": "Getoor",
                        "suffix": ""
                    }
                ],
                "year": 2009,
                "venue": "Proc. EMNLP-09",
                "volume": "",
                "issue": "",
                "pages": "170--179",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Swapna Somasundaran, Galileo Namata, Janyce Wiebe, and Lise Getoor. 2009. Supervised and unsupervised methods in employing discourse rela- tions for improving opinion polarity classification. In Proc. EMNLP-09, pages 170-179.",
                "links": null
            },
            "BIBREF24": {
                "ref_id": "b24",
                "title": "Sentence level discourse parsing using syntactic and lexical information",
                "authors": [
                    {
                        "first": "Radu",
                        "middle": [],
                        "last": "Soricut",
                        "suffix": ""
                    },
                    {
                        "first": "Daniel",
                        "middle": [],
                        "last": "Marcu",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Proc. of HLT-NAACL-03",
                "volume": "",
                "issue": "",
                "pages": "149--156",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Radu Soricut and Daniel Marcu. 2003. Sentence level discourse parsing using syntactic and lexical infor- mation. In Proc. of HLT-NAACL-03, pages 149- 156.",
                "links": null
            },
            "BIBREF25": {
                "ref_id": "b25",
                "title": "Analyzing appraisal automatically",
                "authors": [
                    {
                        "first": "Maite",
                        "middle": [],
                        "last": "Taboada",
                        "suffix": ""
                    },
                    {
                        "first": "Jack",
                        "middle": [],
                        "last": "Grieve",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications",
                "volume": "",
                "issue": "",
                "pages": "158--161",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Maite Taboada and Jack Grieve. 2004. Analyzing ap- praisal automatically. In Proceedings of the AAAI Spring Symposium on Exploring Attitude and Affect in Text: Theories and Applications, Palo Alto, Cal., 22-24 March 2004, pages 158-161.",
                "links": null
            },
            "BIBREF26": {
                "ref_id": "b26",
                "title": "Semisupervised latent variable models for sentence-level sentiment analysis",
                "authors": [
                    {
                        "first": "Oscar",
                        "middle": [],
                        "last": "T\u00e4ckstr\u00f6m",
                        "suffix": ""
                    },
                    {
                        "first": "Ryan",
                        "middle": [],
                        "last": "Mcdonald",
                        "suffix": ""
                    }
                ],
                "year": 2011,
                "venue": "Proc. of ACL-11",
                "volume": "",
                "issue": "",
                "pages": "569--574",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Oscar T\u00e4ckstr\u00f6m and Ryan McDonald. 2011. Semi- supervised latent variable models for sentence-level sentiment analysis. In Proc. of ACL-11, pages 569- 574.",
                "links": null
            },
            "BIBREF27": {
                "ref_id": "b27",
                "title": "Thumbs up or thumbs down? Semantic orientation applied to unsupervised classification of reviews",
                "authors": [
                    {
                        "first": "Peter",
                        "middle": [],
                        "last": "Turney",
                        "suffix": ""
                    }
                ],
                "year": 2002,
                "venue": "Proc. of ACL-02",
                "volume": "",
                "issue": "",
                "pages": "417--424",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Peter Turney. 2002. Thumbs up or thumbs down? Se- mantic orientation applied to unsupervised classifi- cation of reviews. In Proc. of ACL-02, pages 417- 424.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "uris": null,
                "num": null,
                "text": "Despite the pretty design s 2 : I would never recommend it, s 3 : because the quality is bad. Sentiment polarities of and discourse relations between the segments of a sentence.",
                "type_str": "figure"
            },
            "TABREF1": {
                "text": "51.60 100.00 68.07 51.60 SVM 57.05 43.06 49.08 56.44 69.47 62.28 56.66 MLN polarity 53.21 69.58 60.31 59.90 42.62 49.80 55.67 MLN neighborhood 66.38 72.94 69.50 72.02 65.34 68.52 69.02",
                "num": null,
                "html": null,
                "content": "<table><tr><td/><td>positive</td><td/><td/><td>negative</td><td/><td/></tr><tr><td>P</td><td>R</td><td>F</td><td>P</td><td>R</td><td>F</td><td>A</td></tr><tr><td colspan=\"7\">majority baseline 0.00 MLN contrast 0.00 0.00 61.39 73.47 66.89 69.48 56.65 62.41 64.79</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">The discourse segments determined by</td></tr></table>",
                "type_str": "table"
            }
        }
    }
}