{
    "paper_id": "O06-1018",
    "header": {
        "generated_with": "S2ORC 1.0.0",
        "date_generated": "2023-01-19T08:07:47.116105Z"
    },
    "title": "MiniJudge: Software for minimalist experimental syntax",
    "authors": [
        {
            "first": "James",
            "middle": [],
            "last": "Myers",
            "suffix": "",
            "affiliation": {
                "laboratory": "",
                "institution": "National Chung Cheng University",
                "location": {
                    "settlement": "Minhsiung",
                    "country": "Taiwan"
                }
            },
            "email": ""
        }
    ],
    "year": "",
    "venue": null,
    "identifiers": {},
    "abstract": "MiniJudge is free online open-source software to help theoretical syntacticians collect and analyze native-speaker acceptability judgments in a way that combines the speed and ease of traditional introspective methods with the power and statistical validity afforded by rigorous experimental design. This paper shows why MiniJudge is useful, what it feels like to use it, and how it works.",
    "pdf_parse": {
        "paper_id": "O06-1018",
        "_pdf_hash": "",
        "abstract": [
            {
                "text": "MiniJudge is free online open-source software to help theoretical syntacticians collect and analyze native-speaker acceptability judgments in a way that combines the speed and ease of traditional introspective methods with the power and statistical validity afforded by rigorous experimental design. This paper shows why MiniJudge is useful, what it feels like to use it, and how it works.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Abstract",
                "sec_num": null
            }
        ],
        "body_text": [
            {
                "text": "Linguistics is a science because linguists test hypotheses against empirical data, but this testing is done in a much more informal way than in almost any other science. Theoretical syntacticians, for example, violate protocols standard in the rest of the cognitive sciences by acting simultaneously as experimenter and subject, and by showing little concern with the issues of experimental design and quantitative analysis deemed essential in most sciences. Linguists recognize that their informally-collected data are often inconclusive; controversial native-speaker judgments are commonplace problems in both research and teaching. From my conversations with syntacticians, I get the sense that they would appreciate a tool for collecting judgments more reliably, yet this tool should be one that permits them to maintain their traditional focus on theory rather than method. MiniJudge is intended to be just such a tool; section 4 will reveal MiniJudge's inner workings, which involve some underused or novel statistical techniques.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "Currently the only implementation of MiniJudge is MiniJudgeJS, which is written in JavaScript, HTML, and the statistical language R (www.r-project.org).",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Introduction",
                "sec_num": "1."
            },
            {
                "text": "Though some readers may wonder why we should bother with judgments when we can simply analyze corpora, judgments and corpus data are actually complementary performance windows into linguistic competence, with their own strengths and weaknesses (see e.g. Penke & Rosenbach, 2004). The question is how we can extract the maximum value out of judgments in the easiest possible way.",
                "cite_spans": [
                    {
                        "start": 254,
                        "end": 278,
                        "text": "Penke & Rosenbach, 2004)",
                        "ref_id": "BIBREF15"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Balancing speed and reliability in syntactic judgment collection",
                "sec_num": "2."
            },
            {
                "text": "Phillips and Lasnik (2003:61) are entirely right to emphasize that the \"[g]athering of native-speaker judgments is a trivially simple kind of experiment, one that makes it possible to obtain large numbers of highly robust empirical results in a short period of time, from a vast array of languages.\" Even Labov (1996:102), who generally favors corpus data, admits that \"[f]or the great majority of sentences cited by linguists,\" native-speaker intuitions \"are reliable.\" Yet as Phillips and Lasnik (2003:61) also point out, \"it is a truism in linguistics, widely acknowledged and taken into account, that acceptability ratings can vary for many reasons independent of grammaticality.\" Unfortunately, in actual practice linguists don't take the distinction between \"acceptability\" and \"grammaticality\" as seriously as they know they should, and their \"trivially simple methods\" become merely simple-minded (Sch\u00fctze, 1996).",
                "cite_spans": [
                    {
                        "start": 305,
                        "end": 321,
                        "text": "Labov (1996:102)",
                        "ref_id": null
                    },
                    {
                        "start": 905,
                        "end": 920,
                        "text": "(Sch\u00fctze, 1996)",
                        "ref_id": "BIBREF20"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental syntax",
                "sec_num": "2.1."
            },
            {
                "text": "Since the problem of detecting competence in performance is precisely the problem faced by experimental cognitive scientists every day (e.g., testing vision theories with optical illusions), a reasonable response to the syntactician's empirical challenges would be to adopt the protocols standard in the rest of the experimental cognitive sciences: multiple stimuli and subjects (naive ones rather than the bias-prone experimenters themselves), systematic controls, factorial designs, continuous response measures, filler items, counterbalancing, and statistical analysis. When judgments are collected with these more careful protocols, they often reveal hitherto unsuspected complexity. Recent examples of the growing experimental syntax literature include Sorace & Keller (2005) and Featherston (2005); Cowart (1997) is a user-friendly handbook.",
                "cite_spans": [
                    {
                        "start": 758,
                        "end": 780,
                        "text": "Sorace & Keller (2005)",
                        "ref_id": "BIBREF22"
                    },
                    {
                        "start": 785,
                        "end": 803,
                        "text": "Featherston (2005)",
                        "ref_id": "BIBREF6"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Experimental syntax",
                "sec_num": "2.1."
            },
            {
                "text": "Full-fledged experimental syntax is complex, forcing the researcher to spend a lot of time on work that is not theoretically very interesting. The complexity of an experiment should actually be proportional to the subtlety of the effects it is trying to detect. Very clear judgments are detectable with traditional \"trivially simple\" methods; very subtle judgments may require full-fledged experimental methods. But in the vast area in between, a compromise seems appropriate, where methods are powerful enough to yield statistically valid results, yet are simple enough to apply quickly: a minimalist experimental syntax (see Table 1). Maximum of two binary factors Very few sentence sets (about 10)",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 627,
                        "end": 634,
                        "text": "Table 1",
                        "ref_id": "TABREF0"
                    }
                ],
                "eq_spans": [],
                "section": "Minimalist experimental syntax",
                "sec_num": "2.2."
            },
            {
                "text": "Random sentence order Very few speakers (about 10-20) Order treated as a factor in the statistics While conducting a minimalist experiment is much simpler than conducting a full-fledged judgment experiment (an explicit guide is given in Myers 2006), some steps may still be overly complex and/or intimidating to the novice experimenter, in particular the design of the experimental sentences and the statistical analysis. The purpose of the MiniJudge software is to automate these steps.",
                "cite_spans": [
                    {
                        "start": 237,
                        "end": 248,
                        "text": "Myers 2006)",
                        "ref_id": "BIBREF14"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Minimalist experimental syntax",
                "sec_num": "2.2."
            },
            {
                "text": "To show how MiniJudge is used, I describe a recent application of it to a morphosyntactic issue in Chinese; for another example, see MJInfo.htm#resultshelp, reachable through the MiniJudge homepage.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Using MiniJudge",
                "sec_num": "3."
            },
            {
                "text": "MiniJudge has also been used to run syntax experiments on English and Taiwan Sign Language, as well as to run pilots for larger studies and to help teach basic concepts in experimental design.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Using MiniJudge",
                "sec_num": "3."
            },
            {
                "text": "He (2004) presents an interesting observation regarding the interaction of compound-internal phrase structure and affixation of the plural marker men. Part of his paradigm is shown in Table 2, where V = verb and O = object (based on his (2) & (4), pp. 2-3).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 184,
                        "end": 191,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    }
                ],
                "eq_spans": [],
                "section": "Goal of the experiment",
                "sec_num": "3.1."
            },
            {
                "text": "MiniJudgeJS is simply a JavaScript-enabled HTML form. Input and output are handled entirely by text areas; generated text includes code to run statistical analyses in R. Like the rest of the MiniJudge family, MiniJudgeJS divides the experimental process into the steps listed in Table 3 . The segments for the prototype sentences in Table 2 are shown in the first row of Table 5 . The user only has to find parallel substitutes for four segments, rather than having to construct whole new sentences while keeping track of the factorial design (Table 5 also shows the segments needed to generate the new sets in Table 4 ).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 279,
                        "end": 286,
                        "text": "Table 3",
                        "ref_id": "TABREF2"
                    },
                    {
                        "start": 333,
                        "end": 340,
                        "text": "Table 2",
                        "ref_id": "TABREF1"
                    },
                    {
                        "start": 371,
                        "end": 378,
                        "text": "Table 5",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 543,
                        "end": 551,
                        "text": "(Table 5",
                        "ref_id": "TABREF4"
                    },
                    {
                        "start": 611,
                        "end": 618,
                        "text": "Table 4",
                        "ref_id": "TABREF3"
                    }
                ],
                "eq_spans": [],
                "section": "The MiniJudgeJS interface",
                "sec_num": "3.2."
            },
            {
                "text": "The segmentation and set generation processes are designed to work equally well in English-like and Chinese-like orthographies. Of course, since MiniJudge knows no human language, it sometimes makes strange errors, so users are allowed to correct its output, or even to generate new sets manually. After the user has corrected and approved the master list of sentences, it can be saved to a file for use in reports (as I am doing here). In the present experiment, the master list contained 48 sentences (12 sets of 4 sentences each). This is an unusually large number of sentences for a MiniJudge experiment; significant results have been found in experiments with as few as 10 sentences.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The MiniJudgeJS interface",
                "sec_num": "3.2."
            },
            {
                "text": "In order to run a MiniJudge experiment, the user must make three decisions. The first concerns the maximum number of speakers to test. It is possible to get significant results with as few as 7 speakers, but in the present experiment, I generated 30 surveys. As it turned out, only 18 surveys were returned.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Running the experiment",
                "sec_num": "3.4."
            },
            {
                "text": "The second decision concerns whether surveys will be distributed by printed form or by email. In MiniJudgeJS, printing surveys involves saving them from a text area and printing them with a word processor. MiniJudgeJS cannot send email automatically, so emailed surveys must be individually copied and pasted. In the present experiment, I emailed thirty students, former students, or faculty of my linguistics department who did not know the purpose of the experiment.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Running the experiment",
                "sec_num": "3.4."
            },
            {
                "text": "The final decision concerns the instructions, which the user may edit from a default. MiniJudgeJS requires that judgments be entered as 1 (yes) vs. 0 (no); in the current version, if surveys are to be collected electronically, these judgments must be typed before each sentence ID number. Chinese instructions for the VOmen experiment were written with the help of Ko Yu-guang. Each survey starts with the instructions, followed by a speaker ID number (e.g., \"##02\"), and finally the survey itself, with each sentence numbered in the order seen by the speaker. Because the speakers' surveys intentionally hide the factorial design, the experimenter must save this information separately in a schematic survey file. This file is meant to be read only by MiniJudgeJS; as an example, the first line of the schematic survey file for the present experiment is explained in Table 6 . The first three lines of the data file for the VOmen experiment are shown in Table 7 . However, since R is a command-line program, and its outputs can be unintelligible without statistical training, MiniJudgeJS handles the interface with it. The user merely enters the name of the data file, decides whether or not to test for syntactic satiation (explained below in section 3.5.2), and pastes the code generated by MiniJudgeJS into the R window. After the last line has been processed by R, the code will either generate a warning (that the file was not found or was not formatted correctly) or, if all went well, display a simple interpretive summary report. A much more detailed technical report is also saved automatically; this report is explained, step by step for the novice user, in MJInfo.htm#resultshelp.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 868,
                        "end": 875,
                        "text": "Table 6",
                        "ref_id": "TABREF5"
                    },
                    {
                        "start": 955,
                        "end": 962,
                        "text": "Table 7",
                        "ref_id": "TABREF6"
                    }
                ],
                "eq_spans": [],
                "section": "Running the experiment",
                "sec_num": "3.4."
            },
            {
                "text": "When the data file containing the 18 completed surveys in the VOmen experiment was analyzed using the R code generated by MiniJudgeJS, the summary report in Figure 1 was produced. The factor VO had a significant negative effect. The factor men had a significant negative effect. Order had a significant negative effect. There were no other significant effects.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 157,
                        "end": 165,
                        "text": "Figure 1",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "A null result?",
                "sec_num": "3.5.1"
            },
            {
                "text": "The above results do not take cross-item variability into account because no confound between items and factors was detected (p > .2). Although He (2004) makes no predictions relating to satiation, the unexpected null result noted in section 3.5.1 suggests that it may be worthwhile trying out a more complex analysis that includes interactions with order. Running this analysis simply involves telling MiniJudgeJS that we want to test for satiation (by clicking a checkbox), and then pasting the generated code into R. Doing this with the VOmen data resulted in the two new lines in Figure 2 being added to the significance summary.",
                "cite_spans": [
                    {
                        "start": 144,
                        "end": 153,
                        "text": "He (2004)",
                        "ref_id": "BIBREF10"
                    }
                ],
                "ref_spans": [
                    {
                        "start": 584,
                        "end": 592,
                        "text": "Figure 2",
                        "ref_id": "FIGREF3"
                    }
                ],
                "eq_spans": [],
                "section": "A null result?",
                "sec_num": "3.5.1"
            },
            {
                "text": "The interaction between VO and men had a significant positive effect. The interaction of VO * men with Order had a significant negative effect (satiation). ",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "A null result?",
                "sec_num": "3.5.1"
            },
            {
                "text": "MiniJudgeJS, as with all future versions in the MiniJudge family, is free and open source. The JavaScript and R code can be modified freely by downloading the HTML file and opening it in a text editor, and both are heavily commented to make them easier to follow. In this section I give overviews of the programming relating to material generation and statistical analysis.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "The inner workings",
                "sec_num": "4."
            },
            {
                "text": "As described in section 3.3, MiniJudgeJS can assist with the generation of additional sentence sets. This involves two major phases: segmenting the prototype sentences into the largest repeated substrings, and substituting new segments for old segments in the new sentence sets.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Material generation",
                "sec_num": "4.1."
            },
            {
                "text": "The first step is to determine whether the prototype sentences contain any spaces. If they do, words are treated as basic units, and capitalization is removed from the initial word and any sentence-final punctuation mark is also set aside (for adding again later). If there are no spaces (as in Chinese), characters are treated as basic units. Next, the boundaries between prototype sentences are demarcated to indicate that cross-sentence strings can never be segments. The algorithm for determining other segment boundaries requires the creation of a lexicon containing all unique words (or characters) in the prototype corpus. If the algorithm detects that items from the corpus and from the lexicon match only if one of the items is lowercase, this item is recapitalized. Versions of the prototype sentences with \"word-based\" capitalization is later used when old segments are replaced by new ones.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Material generation",
                "sec_num": "4.1."
            },
            {
                "text": "The most crucial step in the segmentation algorithm is to check each word (or character) in the lexicon to determine whether or not it has at least two distinct neighbors on the same side somewhere in the corpus. For example, suppose the prototype set consists of the sentences \"A dog loves the cat. The cat loves a dog.\" The lexical item \"loves\" has two left neighbors: \"dog\" and \"cat\". Thus a segment boundary should be inserted to the left of \"loves\" in the corpus. Similarly, the right neighbor of \"loves\" is sometimes \"the\" and sometimes \"a\"; hence \"loves\" will be treated as a whole segment. By contrast, the lexical item \"cat\" always has the same item to its left (once sentence-initial capitalization is removed): \"the\". Similarly, the right neighbor of \"the\" is always \"cat\". Thus \"the cat\" will be treated as a segment, and the same logic applies to \"a dog\". The prototype segments are thus \"a dog\", \"loves\", and \"the cat\".",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Material generation",
                "sec_num": "4.1."
            },
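The neighbor-variability heuristic just described can be sketched in JavaScript (MiniJudgeJS's own language). This is an illustrative reimplementation under stated assumptions, not the program's actual code, and the function name is invented:

```javascript
// Illustrative sketch of the segmentation heuristic: a boundary is placed
// wherever a word's neighbors vary across the corpus, so maximal stable
// word runs become segments.
function segment(sentences) {
  // Tokenize; lowercase the initial word and set aside final punctuation.
  const tokenized = sentences.map(s => {
    const words = s.replace(/[.!?]$/, "").split(" ");
    words[0] = words[0].toLowerCase();
    return words;
  });
  // Record every left and right neighbor of each word; "#" marks a
  // sentence edge, so cross-sentence strings can never form segments.
  const left = {}, right = {};
  for (const words of tokenized) {
    words.forEach((w, i) => {
      (left[w] = left[w] || new Set()).add(i > 0 ? words[i - 1] : "#");
      (right[w] = right[w] || new Set()).add(i < words.length - 1 ? words[i + 1] : "#");
    });
  }
  // Insert a boundary before any word whose left context varies, or after
  // any word whose right context varies.
  return tokenized.map(words => {
    const segments = [];
    let current = [words[0]];
    for (let i = 1; i < words.length; i++) {
      if (left[words[i]].size > 1 || right[words[i - 1]].size > 1) {
        segments.push(current.join(" "));
        current = [];
      }
      current.push(words[i]);
    }
    segments.push(current.join(" "));
    return segments;
  });
}
```

Applied to the two-sentence example above, this returns the segments "a dog", "loves", "the cat" for the first sentence.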
            {
                "text": "The final phase involves substituting the user-chosen new segments for the prototype segments. This is done using JavaScript's built-in regular expression functions, which only became available with Netscape 4 and Internet Explorer 4.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Material generation",
                "sec_num": "4.1."
            },
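A regex-based substitution of this kind can be sketched as follows; the function names are invented for this example, and the escaping step is an assumption about how literal segments would need to be protected:

```javascript
// Illustrative sketch of the substitution phase using JavaScript's
// built-in RegExp support.
function escapeRegExp(s) {
  // Quote regex metacharacters so a segment like "a dog?" matches literally.
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}
function substitute(sentence, oldSegment, newSegment) {
  // Replace every occurrence of the prototype segment with the new one.
  return sentence.replace(new RegExp(escapeRegExp(oldSegment), "g"), newSegment);
}
```

For example, substitute("a dog loves the cat", "a dog", "a bird") returns "a bird loves the cat".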
            {
                "text": "The statistical analyses conducted by MiniJudgeJS involve several innovations: the use of GLMM, the inclusion of order and interactions with order as factors, the use of JavaScript to communicate with R, the use of R code to extract key values from R's technical output so that a simple report can be generated, and the use of R code to compare by-subject and by-subject-and-item analyses to decide whether the latter is really necessary. In this section I describe each of these innovations in turn.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Statistical analysis",
                "sec_num": "4.2."
            },
            {
                "text": "As explained in section 3. scores, which are reliable only if the number of observations is greater than 50 or so, but in actual practice, 50 judgments are trivial to collect (e.g., 5 speakers judging 10 sentences each). Second, like regression in general, GLMM assumes that the correlation between the dependent and independent variables is not perfect, so it is paradoxically unable to confirm the significance of perfect correlations.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GLMM",
                "sec_num": "4.3.1"
            },
            {
                "text": "Third, like logistic regression (but unlike ANOVA or ordinary regression), it is impossible to calculate GLMM coefficients and p values perfectly; they can only be estimated. Unfortunately, the best way to estimate GLMM values is extremely complicated and slow, so R uses \"simpler\" yet less accurate estimation methods. Currently, R provides two options for estimating GLMM coefficients: the faster but less accurate penalized quasi-likelihood approximation, and the slower but more accurate Laplacian approximation. MiniJudgeJS uses the latter.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GLMM",
                "sec_num": "4.3.1"
            },
            {
                "text": "The function in the lme4/Matrix packages used for GLMM is lmer, which can also handle linear mixed-effect modeling (i.e., repeated-measures linear regression). The syntax is illustrated in Figure 3 , which shows the commands used to run the final analyses described above in section 3.5.2.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 189,
                        "end": 197,
                        "text": "Figure 3",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "GLMM",
                "sec_num": "4.3.1"
            },
            {
                "text": "\"Factor1\" and \"Factor2\" are variables whose values are set in the R code to represent the actual factors.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "GLMM",
                "sec_num": "4.3.1"
            },
            {
                "text": "The use of categorical data is signaled by setting the distribution family to \"binomial\". The name of the loaded data file is arbitrarily called \"minexp\" (for minimalist experiment). The first function treats only subjects as random, while the second function treats both subjects and items as random. The choice to test for satiation or not is determined by the user; based on this choice, JavaScript generates different versions of the R code. The choice to run one-factor or two-factor analyses is determined by the R code itself by counting the number of factors in the data file. Both analyses in Figure 3 are always run, and then compared with another R function described below in 4.3.5. glmm1 = lmer(Judgment ~ Factor1 * Factor2 * Order + (1|Speaker), data = minexp, family = \"binomial\", method = \"Laplace\") glmm2 = lmer(Judgment ~ Factor1 * Factor2 * Order + (1|Speaker) + (1|Sentence), data = minexp, family = \"binomial\", method = \"Laplace\") effect: early judgments (when comparison is impossible) will be different from later judgments. Thus factoring out order effects in the statistics serves roughly the same purpose as counterbalanced lists.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 602,
                        "end": 610,
                        "text": "Figure 3",
                        "ref_id": "FIGREF4"
                    }
                ],
                "eq_spans": [],
                "section": "GLMM",
                "sec_num": "4.3.1"
            },
            {
                "text": "JavaScript is much more powerful than many programmers realize. In fact, a key inspiration for Unfortunately, the necessary statistical programming is quite formidable.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "JavaScript as an R interface",
                "sec_num": "4.3.3"
            },
            {
                "text": "Instead, in MiniJudgeJS the role of JavaScript in the statistical analysis is mainly as a user-friendly GUI. Since the statistics needed for a MiniJudge experiment is highly standardized, very little input is needed from the user, but the potential to use JavaScript to interface with R in more flexible ways is there. This would help fix a major limitation with R, whose command-line interface is quite intimidating for novice users, and whose online help leaves a lot to be desired (cf. Fox, 2005 ).",
                "cite_spans": [
                    {
                        "start": 489,
                        "end": 498,
                        "text": "Fox, 2005",
                        "ref_id": "BIBREF7"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "JavaScript as an R interface",
                "sec_num": "4.3.3"
            },
            {
                "text": "Of course, using JavaScript as an interface has its limitations, the most notable being the built-in security constraints that prevent JavaScript from reading or writing files, or from communicating directly with other programs. For example, it is impossible to have JavaScript run R in the background to save users the bother of copying and pasting R code. This is why we are currently exploring other versions of MiniJudge. One that has made some progress is MiniJudgeJava, written by Chen Tsung-ying in Java using its own platform-independent GUI tools. Interfacing with R is likely to remain tricky, however, unless we create something like MiniJudgeR, written in R itself, or figure out how to program GLMM directly in JavaScript.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "JavaScript as an R interface",
                "sec_num": "4.3.3"
            },
            {
                "text": "GLMM is a high-powered statistical tool, unlikely to be used by people who don't already have a strong background in statistics, and so the outputs generated by R are not understandable without such a background. Since MiniJudge is intended for statistical novices, extra programming is needed to translate R output into plain language. For MiniJudgeJS, the most crucial portion of R's output for GLMM is the matrix containing the regression coefficient estimates and p values, like that shown in Figure 4 (from the VOmen experiment, without testing for satiation MiniJudgeJS \"sinks\" lmer's displayed output to an offline file, and then reads this file back in as a string (the offline file becomes the permanent record of the detailed analysis). The string is then searched for the string \"(Intercept)\" which always appears at the upper left of the value matrix. The coefficient is the first value to the right of this, and the p value is the fourth value (skipping \"<\", if any).",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 497,
                        "end": 505,
                        "text": "Figure 4",
                        "ref_id": "FIGREF5"
                    }
                ],
                "eq_spans": [],
                "section": "R code to simplify output",
                "sec_num": "4.3.4"
            },
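The extraction logic can be sketched as follows. This is an illustrative JavaScript rendering under stated assumptions (MiniJudgeJS actually performs this step in R, and the function name and sample layout are invented): scan the sunk output from the "(Intercept)" marker and pull each row's estimate and p value, skipping any "<" and ignoring trailing lines that are not coefficient rows.

```javascript
// Illustrative sketch: parse the coefficient table out of lmer's printed
// output, starting at "(Intercept)". Each row yields the factor name, the
// coefficient estimate (first value), and the p value (fourth value).
function extractRows(output) {
  const start = output.indexOf("(Intercept)");
  return output.slice(start).split("\n")
    .map(line => line.trim().split(/\s+/).filter(t => t !== "<"))
    .filter(tokens => tokens.length >= 5 && !Number.isNaN(Number(tokens[1])))
    .map(tokens => ({
      name: tokens[0],             // factor or interaction name
      estimate: Number(tokens[1]), // regression coefficient
      p: Number(tokens[4])         // p value (after skipping "<")
    }));
}
```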
            {
                "text": "If the p value associated with a factor or interaction is less than 0.05, a summary line is generated that gives the actual factor name and the sign of the estimate, as in Figures 1 and 2 above. The R code generates the summary table counting the number of yes judgments for each category (see Figure 1) directly from the data file itself.",
                "cite_spans": [],
                "ref_spans": [
                    {
                        "start": 172,
                        "end": 187,
                        "text": "Figures 1 and 2",
                        "ref_id": "FIGREF2"
                    },
                    {
                        "start": 294,
                        "end": 303,
                        "text": "Figure 1)",
                        "ref_id": "FIGREF2"
                    }
                ],
                "eq_spans": [],
                "section": "R code to simplify output",
                "sec_num": "4.3.4"
            },
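The summary step itself can be sketched like this (an illustrative reconstruction, not MiniJudgeJS's actual code; the row objects are assumed to hold the factor name, coefficient estimate, and p value as extracted above):

```javascript
// Illustrative sketch: keep only effects with p < 0.05 and report each
// with the sign of its coefficient, mirroring the summary lines shown in
// Figures 1 and 2.
function summarize(rows) {
  return rows
    .filter(r => r.name !== "(Intercept)" && r.p < 0.05)
    .map(r => "The factor " + r.name + " had a significant " +
              (r.estimate < 0 ? "negative" : "positive") + " effect.");
}
```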
            {
                "text": "MiniJudgeJS runs both by-subject and by-subject-and-item analyses, but it reports only the first in the main summary unless it finds that the more complex analysis is really necessary. This approach differs from standard psycholinguistic practice, where both by-subject and by-item analyses are always run. A commonly cited reason for always running a by-item analysis is that it is required to test for generality across items, just as a by-subject analysis tests for generality across subjects. However, this logic is based on a misinterpretation of Clark (1973) , the paper usually cited as justification. The second problem with the standard justification for performing obligatory by-item analyses, as Raaijmakers et al. (1999) emphasize, is that the advice given in Clark (1973) actually applies only to experiments without matched items, such as an experiment comparing a random set of sentences with transitive verbs (\"eat\" etc) with a random set of sentences with unrelated intransitive verbs (\"sleep\" etc).",
                "cite_spans": [
                    {
                        "start": 552,
                        "end": 564,
                        "text": "Clark (1973)",
                        "ref_id": "BIBREF4"
                    },
                    {
                        "start": 707,
                        "end": 732,
                        "text": "Raaijmakers et al. (1999)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 772,
                        "end": 784,
                        "text": "Clark (1973)",
                        "ref_id": "BIBREF4"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "By-subject and by-item analyses",
                "sec_num": "4.3.5"
            },
            {
                "text": "Such sentences will differ in more than just the crucial factor (transitive vs. intransitive), so even if a difference in judgments is found, it may actually relate to uninteresting confounded properties (e.g., the lexical frequency of the verbs). However, if lexically matched items are used, as in the VOmen experiment, there is no such confound, since items within each set differ only in terms of the experimental factor(s). If items are sufficiently well matched, taking cross-item variation into account won't make any difference in the analysis (except to make it much more complicated), but if they are not well matched, ignoring the cross-item variation will result in misleadingly low p values.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "By-subject and by-item analyses",
                "sec_num": "4.3.5"
            },
            {
                "text": "Nevertheless, if we only computed models that take cross-item variation into account, we might lose useful information. After all, a high p value does not necessarily mean that there is no pattern at all, only that we have failed to detect it. Thus it may be useful to know if a by-speaker analysis is significant even if the by-sentence analysis is not. Such an outcome could mean that the significant byspeaker result is an illusion due to an uninteresting lexical confound, but it could instead mean that if we do a better job matching the items in our next experiment, we will be able to demonstrate the validity of our theoretically interesting factor. Thus MiniJudge runs both types of analyses, and only chooses the by-subjects-and-items analysis for the main report if a statistically significant confound between factors and items is detected. The full results of both analyses are saved in an off-line file, along with the results of the statistical comparison of them.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "By-subject and by-item analyses",
                "sec_num": "4.3.5"
            },
            {
                "text": "The R language makes it quite easy to perform this comparison. The model in which only speakers are treated as random is a special case of the model in which both speakers and sentences are treated as random. This means the two GLMM models can be compared by a likelihood ratio test using ANOVA (see Pinheiro & Bates, 2000). As with the output of the lmer function, the output of the lme4 package's anova function makes it difficult to extract p values, so again the output is \"sunk\" to the offline analysis file to be read back in as a string. Only if the p value is below 0.05 is the more complex model taken as significantly better. If the p value is above 0.2, MiniJudgeJS assumes that items and factors are not confounded and reports only the by-subjects-only analysis in the main summary.",
                "cite_spans": [
                    {
                        "start": 300,
                        "end": 323,
                        "text": "Pinheiro & Bates, 2000)",
                        "ref_id": "BIBREF17"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "By-subject and by-item analyses",
                "sec_num": "4.3.5"
            },
            {
                "text": "Nevertheless, MiniJudgeJS, erring on the side of caution, gives a warning if 0.2 > p > 0.05. In any case, both GLMM analyses are available for inspection in the offline analysis file. Each analysis also includes additional information, generated by lmer, that may help determine which one is really more reliable, including variance of the random variables and the estimated scale (compared with 1); these details are explained in MJInfo.htm#resultshelp.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "By-subject and by-item analyses",
                "sec_num": "4.3.5"
            },
            {
                "text": "In the case of the VOmen experiment, the comparison of the two models showed that the bysubjects-only model was sufficient (p = 1). This is unsurprising, given that the materials were almost perfectly matched, and that the by-items table showed no outliers among the sentence judgments.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "By-subject and by-item analyses",
                "sec_num": "4.3.5"
            },
            {
                "text": "The final problem with the standard justification for automatic by-item analyses is one that even Raaijmakers et al. (1999) fail to point out. Namely, since repeated-measures regression models make it possible to take cross-speaker and cross-sentence variation into account at the same time, without throwing away any data, they are superior to standard models like ANOVA. To learn more about how advances in statistics have made some psycholinguistic traditions obsolete, see Baayen (2004) .",
                "cite_spans": [
                    {
                        "start": 98,
                        "end": 123,
                        "text": "Raaijmakers et al. (1999)",
                        "ref_id": "BIBREF19"
                    },
                    {
                        "start": 477,
                        "end": 490,
                        "text": "Baayen (2004)",
                        "ref_id": "BIBREF1"
                    }
                ],
                "ref_spans": [],
                "eq_spans": [],
                "section": "By-subject and by-item analyses",
                "sec_num": "4.3.5"
            },
            {
                "text": "MiniJudge, currently implemented only in the form of MiniJudgeJS, is software for theoretical syntacticians without any experimental training who want to collect and interpret judgments quickly and reliably. Though MiniJudgeJS is limited in some ways, in particular in how it interfaces with R, it is still quite easy to use, as testing by my students has demonstrated. Moreover, it is unique, offering syntacticians power that they cannot obtain any other way. Behind this power are original programming and statistical techniques. Finally, MiniJudgeJS is an entirely free, open-source program (as will be all future versions). Anyone interested is invited to try it out, save it for use offline, and contribute to its further development.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Conclusions",
                "sec_num": "5."
            }
        ],
        "back_matter": [
            {
                "text": "Pezzullo and Harald Baayen also provided helpful information on programming and statistical matters.Of course I am solely responsible for any mistakes.",
                "cite_spans": [],
                "ref_spans": [],
                "eq_spans": [],
                "section": "Acknowledgements",
                "sec_num": "6."
            }
        ],
        "bib_entries": {
            "BIBREF0": {
                "ref_id": "b0",
                "title": "Random-effects Modeling of Categorical Response Data",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Agresti",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "G"
                        ],
                        "last": "Booth",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "P"
                        ],
                        "last": "Hobert",
                        "suffix": ""
                    },
                    {
                        "first": "B",
                        "middle": [],
                        "last": "Caffo",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Sociological Methodology",
                "volume": "30",
                "issue": "",
                "pages": "27--80",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Agresti, J. G. Booth, J. P. Hobert, & B. Caffo, \"Random-effects Modeling of Categorical Response Data,\" Sociological Methodology, Vol. 30, pp. 27-80, 2000.",
                "links": null
            },
            "BIBREF1": {
                "ref_id": "b1",
                "title": "Statistics in Psycholinguistics: A Critique of Some Current Gold standards",
                "authors": [
                    {
                        "first": "R",
                        "middle": [
                            "H"
                        ],
                        "last": "Baayen",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Mental Lexicon Working Papers",
                "volume": "1",
                "issue": "",
                "pages": "1--45",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R. H. Baayen, \"Statistics in Psycholinguistics: A Critique of Some Current Gold standards,\" Mental Lexicon Working Papers, Vol. 1, University of Alberta, Canada, 2004, pp. 1-45. www.mpi.nl/world/persons/private/baayen/submitted/statistics.pdf",
                "links": null
            },
            "BIBREF2": {
                "ref_id": "b2",
                "title": "The Interpretation of Least Squares Regression with Interaction or Polynomial Terms",
                "authors": [
                    {
                        "first": "I",
                        "middle": [
                            "S"
                        ],
                        "last": "Bernhardt & B",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Jung",
                        "suffix": ""
                    }
                ],
                "year": 1979,
                "venue": "The Review of Economics and Statistics",
                "volume": "61",
                "issue": "3",
                "pages": "481--483",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "I. Bernhardt & B. S. Jung, \"The Interpretation of Least Squares Regression with Interaction or Polynomial Terms,\" The Review of Economics and Statistics, Vol. 61, No. 3, 1979, pp. 481-483.",
                "links": null
            },
            "BIBREF3": {
                "ref_id": "b3",
                "title": "Statistical Models in S",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "M J"
                        ],
                        "last": "Chambers & T",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Hastie",
                        "suffix": ""
                    }
                ],
                "year": 1993,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. M. Chambers & T. J. Hastie, Statistical Models in S, Chapman & Hall, 1993.",
                "links": null
            },
            "BIBREF4": {
                "ref_id": "b4",
                "title": "The Language-as-fixed-effect Fallacy: A Critique of Language Statistics in Psychological Research",
                "authors": [
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Clark",
                        "suffix": ""
                    }
                ],
                "year": 1973,
                "venue": "Journal of Verbal Learning and Verbal Behavior",
                "volume": "12",
                "issue": "",
                "pages": "335--359",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "H. Clark, \"The Language-as-fixed-effect Fallacy: A Critique of Language Statistics in Psychological Research,\" Journal of Verbal Learning and Verbal Behavior, Vol. 12, pp. 335-359, 1973.",
                "links": null
            },
            "BIBREF5": {
                "ref_id": "b5",
                "title": "Experimental Syntax: Applying Objective Methods to Sentence Judgments",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Cowart",
                        "suffix": ""
                    }
                ],
                "year": 1997,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "W. Cowart, Experimental Syntax: Applying Objective Methods to Sentence Judgments. Sage Publications, London, 1997.",
                "links": null
            },
            "BIBREF6": {
                "ref_id": "b6",
                "title": "Magnitude Estimation and What It Can Do for Your Syntax: Some whconstraints in German",
                "authors": [
                    {
                        "first": "S",
                        "middle": [],
                        "last": "Featherston",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Lingua",
                "volume": "115",
                "issue": "11",
                "pages": "1525--1550",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "S. Featherston, \"Magnitude Estimation and What It Can Do for Your Syntax: Some wh- constraints in German,\" Lingua, Vol. 115, No. 11, pp. 1525-1550, 2005.",
                "links": null
            },
            "BIBREF7": {
                "ref_id": "b7",
                "title": "The R Commander: A Basic-statistics Graphical User Interface to R",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Fox",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Journal of Statistical Software",
                "volume": "14",
                "issue": "9",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Fox, \"The R Commander: A Basic-statistics Graphical User Interface to R,\" Journal of Statistical Software, Vol. 14, No. 9, 2005.",
                "links": null
            },
            "BIBREF8": {
                "ref_id": "b8",
                "title": "On the Syntax and Processing of wh-questions in Spanish",
                "authors": [
                    {
                        "first": "G",
                        "middle": [],
                        "last": "",
                        "suffix": ""
                    }
                ],
                "year": null,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "G. Goodall, \"On the Syntax and Processing of wh-questions in Spanish,\" in WCCFL 23",
                "links": null
            },
            "BIBREF10": {
                "ref_id": "b10",
                "title": "The Words-and-rules Theory: Evidence from Chinese Morphology",
                "authors": [
                    {
                        "first": "Y",
                        "middle": [],
                        "last": "He",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Taiwan Journal of Linguistics",
                "volume": "2",
                "issue": "2",
                "pages": "1--26",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "Y. He, \"The Words-and-rules Theory: Evidence from Chinese Morphology,\" Taiwan Journal of Linguistics, Vol. 2, No. 2, pp. 1-26, 2004.",
                "links": null
            },
            "BIBREF11": {
                "ref_id": "b11",
                "title": "Accessing Linguistic Competence: Evidence from Children's and Adults' Acceptability Judgements. Doctoral dissertation",
                "authors": [
                    {
                        "first": "K",
                        "middle": [],
                        "last": "Hiramatsu",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "K. Hiramatsu, Accessing Linguistic Competence: Evidence from Children's and Adults' Acceptability Judgements. Doctoral dissertation, University of Connecticut, Storrs, 2000.",
                "links": null
            },
            "BIBREF12": {
                "ref_id": "b12",
                "title": "When Intuitions Fail",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Labov",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "CLS 32: Papers from the Parasession on Theory and Data in Linguistics",
                "volume": "",
                "issue": "",
                "pages": "77--105",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "W. Labov, \"When Intuitions Fail,\" in CLS 32: Papers from the Parasession on Theory and Data in Linguistics, L. McNair (ed), University of Chicago, pp. 77-105, 1996.",
                "links": null
            },
            "BIBREF13": {
                "ref_id": "b13",
                "title": "Probabilistic Sociolinguistics: Beyond Variable Rules",
                "authors": [
                    {
                        "first": "N",
                        "middle": [],
                        "last": "Mendoza-Denton",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Hay",
                        "suffix": ""
                    },
                    {
                        "first": "&",
                        "middle": [
                            "S"
                        ],
                        "last": "Jannedy",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "97--138",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "N. Mendoza-Denton, J. Hay, & S. Jannedy, \"Probabilistic Sociolinguistics: Beyond Variable Rules,\" in Probabilistic linguistics, R. Bod, J. Hay, & S. Jannedy (eds), MIT Press, Cambridge, MA, pp. 97-138, 2003.",
                "links": null
            },
            "BIBREF14": {
                "ref_id": "b14",
                "title": "An Experiment in Minimalist Experimental Syntax",
                "authors": [
                    {
                        "first": "J",
                        "middle": [],
                        "last": "Myers",
                        "suffix": ""
                    }
                ],
                "year": 2006,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. Myers, \"An Experiment in Minimalist Experimental Syntax,\" National Chung Cheng University ms. Submitted, 2006.",
                "links": null
            },
            "BIBREF15": {
                "ref_id": "b15",
                "title": "What Counts as Evidence in Linguistics? An Introduction",
                "authors": [
                    {
                        "first": "M",
                        "middle": [],
                        "last": "Penke",
                        "suffix": ""
                    },
                    {
                        "first": "& A",
                        "middle": [],
                        "last": "Rosenbach",
                        "suffix": ""
                    }
                ],
                "year": 2004,
                "venue": "Studies in Language",
                "volume": "28",
                "issue": "3",
                "pages": "480--526",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "M. Penke & A. Rosenbach, \"What Counts as Evidence in Linguistics? An Introduction,\" Studies in Language, Vol. 28, No. 3, pp. 480-526, 2004.",
                "links": null
            },
            "BIBREF16": {
                "ref_id": "b16",
                "title": "Linguistics and Empirical Evidence: Reply to Edelman and Christiansen",
                "authors": [
                    {
                        "first": "C",
                        "middle": [],
                        "last": "Phillips&",
                        "suffix": ""
                    },
                    {
                        "first": "H",
                        "middle": [],
                        "last": "Lasnik",
                        "suffix": ""
                    }
                ],
                "year": 2003,
                "venue": "Trends in Cognitive Science",
                "volume": "7",
                "issue": "2",
                "pages": "61--62",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. Phillips& H. Lasnik, \"Linguistics and Empirical Evidence: Reply to Edelman and Christiansen,\" Trends in Cognitive Science, Vol. 7, No. 2, pp. 61-62, 2003.",
                "links": null
            },
            "BIBREF17": {
                "ref_id": "b17",
                "title": "Mixed-Effects Models in S and S-Plus",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "C M"
                        ],
                        "last": "Pinheiro & D",
                        "suffix": ""
                    },
                    {
                        "first": "",
                        "middle": [],
                        "last": "Bates",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. C. Pinheiro & D. M. Bates, Mixed-Effects Models in S and S-Plus. Springer, Berlin, 2000.",
                "links": null
            },
            "BIBREF18": {
                "ref_id": "b18",
                "title": "R: A Language and Environment for Statistical Computing",
                "authors": [
                    {
                        "first": "",
                        "middle": [],
                        "last": "R Development Core Team",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "R Foundation for Statistical Computing",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "R Development Core Team. \"R: A Language and Environment for Statistical Computing,\" R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0, 2005, URL http://www.R-project.org.",
                "links": null
            },
            "BIBREF19": {
                "ref_id": "b19",
                "title": "How to Deal with 'the Language-as-fixed-effect Fallacy': Common Misconceptions and Alternative Solutions",
                "authors": [
                    {
                        "first": "J",
                        "middle": [
                            "G W"
                        ],
                        "last": "Raaijmakers",
                        "suffix": ""
                    },
                    {
                        "first": "J",
                        "middle": [
                            "M C"
                        ],
                        "last": "Schrijnemakers",
                        "suffix": ""
                    },
                    {
                        "first": "&",
                        "middle": [
                            "F"
                        ],
                        "last": "Gremmen",
                        "suffix": ""
                    }
                ],
                "year": 1999,
                "venue": "Journal of Memory and Language",
                "volume": "41",
                "issue": "",
                "pages": "416--426",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "J. G. W. Raaijmakers, J. M. C. Schrijnemakers, & F. Gremmen, \"How to Deal with 'the Language-as-fixed-effect Fallacy': Common Misconceptions and Alternative Solutions,\" Journal of Memory and Language, Vol. 41, pp. 416-426, 1999.",
                "links": null
            },
            "BIBREF20": {
                "ref_id": "b20",
                "title": "The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology",
                "authors": [
                    {
                        "first": "C",
                        "middle": [
                            "T"
                        ],
                        "last": "Sch\u00fctze",
                        "suffix": ""
                    }
                ],
                "year": 1996,
                "venue": "",
                "volume": "",
                "issue": "",
                "pages": "",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "C. T. Sch\u00fctze, The Empirical Base of Linguistics: Grammaticality Judgments and Linguistic Methodology. University of Chicago Press, Chicago, 1996.",
                "links": null
            },
            "BIBREF21": {
                "ref_id": "b21",
                "title": "An Experimental Investigation of Syntactic Satiation Effects",
                "authors": [
                    {
                        "first": "W",
                        "middle": [],
                        "last": "Snyder",
                        "suffix": ""
                    }
                ],
                "year": 2000,
                "venue": "Linguistic Inquiry",
                "volume": "31",
                "issue": "",
                "pages": "575--582",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "W. Snyder, \"An Experimental Investigation of Syntactic Satiation Effects,\" Linguistic Inquiry, Vol. 31, pp. 575-582, 2000.",
                "links": null
            },
            "BIBREF22": {
                "ref_id": "b22",
                "title": "Gradience in Linguistic Data",
                "authors": [
                    {
                        "first": "A",
                        "middle": [],
                        "last": "Sorace",
                        "suffix": ""
                    },
                    {
                        "first": "& F",
                        "middle": [],
                        "last": "Keller",
                        "suffix": ""
                    }
                ],
                "year": 2005,
                "venue": "Lingua",
                "volume": "115",
                "issue": "",
                "pages": "1497--1524",
                "other_ids": {},
                "num": null,
                "urls": [],
                "raw_text": "A. Sorace & F. Keller, \"Gradience in Linguistic Data,\" Lingua, Vol. 115, pp. 1497-1524, 2005.",
                "links": null
            }
        },
        "ref_entries": {
            "FIGREF0": {
                "text": "This is where MiniJudge comes in. MiniJudge (www.ccunix.ccu.edu.tw/~lngproc/MiniJudge.htm) is software to help theoretical syntacticians design, run, and analyze linguistic judgment experiments quickly and painlessly. Because MiniJudge experiments involve testing the minimum number of speakers and sentences in the shortest amount of time, all while sacrificing the least amount of statistical power, I call them \"minimalist\" experiments. In this paper I first argue why a tool like MiniJudge is necessary. I then walk through a sample MiniJudge experiment on Chinese. Finally, I",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF1": {
                "text": "Surveys themselves are randomized individually to prevent order confounds, as is standard in psycholinguistics. The randomization algorithm, taken from Cowart (1997:101), results in every sentence having an equal chance to appear at any point in the experiment (by randomization of blocks), while simultaneously distributing sentence types evenly and randomly.",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF2": {
                "text": "Default results summary generated by MiniJudgeJS for the VOmen experiment",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF3": {
                "text": "New lines in results summary when satiation was tested in the VOmen experiment As hoped, factoring out the interactions with order revealed a significant interaction between the factors [VO] and[men]. This shows that the ratio difference seen inFigure 1is indeed statistically reliable (the detailed report file shows p = 0.02), thus vindicating He's empirical claim. This new analysis also detected satiation in the VOmen effect; it was this interaction with order that had obscured evidence for the VOmen effect in the default analysis.This experiment thus not only provided reliable evidence in favor of the empirical claim made byHe (2004), but it also revealed three additional patterns not reported by He: overall lower acceptability for VO forms relative to OV forms, overall lower acceptability of men forms, and the satiability of the VOmen effect. Detecting satiation, and the VOmen effect it obscured, depended crucially on the use of careful experimental design and statistical analysis, and would have been impossible using traditional informal methods. Despite this power, the MiniJudge experiment was designed, run, and analyzed within a matter of days, rather than the weeks required for full-fledged experimental syntax.",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF4": {
                "text": "R commands for computing GLMM when testing satiation in a two-factor experiment4.3.2 Order as a factorMiniJudgeJS includes order as a factor whether or not the user tests for satiation, to compensate for the fact that MiniJudge experiments use no counterbalanced lists of sentences across subgroups of speakers.List counterbalancing is used in full-fledged experimental syntax so that speakers don't use an explicit comparison strategy when judging sentences from the same set (a comparison strategy may create an illusory contrast or have other undesirable consequences). However, comparison can only occur when the second sentence of a matched pair is encountered. If roughly half of the speakers get sentence type [+F] first and half get [-F] first, then on average, judgments for [+F] vs. [-F] are only partially influenced by a comparison strategy. The comparison strategy (if any) will be realized as an order",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "FIGREF5": {
                "text": "GLMM output generated by lmer for the VOmen experiment without testing for satiation Unfortunately, the output of the lmer function is a list object, containing only the parameters used to compute the estimates and p values, not the values themselves. Thus the R code generated by",
                "num": null,
                "uris": null,
                "type_str": "figure"
            },
            "TABREF0": {
                "type_str": "table",
                "content": "<table><tr><td>Binary yes/no judgments</td><td>No counterbalancing of sentence lists</td></tr><tr><td>Experimental sentences only (no fillers)</td><td/></tr></table>",
                "html": null,
                "num": null,
                "text": "Defining characteristics of minimalist experimental syntax"
            },
            "TABREF1": {
                "type_str": "table",
                "content": "<table><tr><td colspan=\"2\">The VOmen paradigm of He (2004)</td></tr><tr><td>[+men]</td><td>[-men]</td></tr><tr><td>[+VO] *zhizao yaoyan zhe men</td><td>zhizao yaoyan zhe</td></tr><tr><td>make rumor person PLURAL</td><td>make rumor person</td></tr><tr><td>[-VO] yaoyan zhizao zhe men</td><td>yaoyan zhizao zhe</td></tr><tr><td>rumor make person PLURAL</td><td>rumor make person</td></tr><tr><td colspan=\"2\">Some speakers shown He's starred and non-starred examples are willing to agree with his judgments,</td></tr><tr><td colspan=\"2\">but it's likely that the star pattern has biased them. It may also be that He's generalization works for the</td></tr><tr><td colspan=\"2\">few examples he cites, but fails in general. My goal, then, was to use MiniJudge to generate more</td></tr><tr><td>examples to test systematically on native speakers.</td><td/></tr></table>",
                "html": null,
                "num": null,
                "text": ""
            },
            "TABREF2": {
                "type_str": "table",
                "content": "<table><tr><td>I. Design experiment</td><td>II. Run experiment</td><td>III. Analyze experiment</td></tr><tr><td>Choose experimental factors</td><td>Choose number of speakers</td><td>Download and install R</td></tr><tr><td colspan=\"2\">Choose set of prototype sentences Write instructions for speakers</td><td>Enter raw results</td></tr><tr><td colspan=\"2\">Choose number of sentence sets Print or email survey forms</td><td>Generate data file</td></tr><tr><td colspan=\"2\">Segment prototype set (optional) Save schematic survey file</td><td>Save data file</td></tr><tr><td>Replace segments (optional)</td><td/><td>Generate R code</td></tr><tr><td>Save master list of test sentences</td><td/><td>Paste R command code into R</td></tr></table>",
                "html": null,
                "num": null,
                "text": "The steps used by MiniJudge experiment begins by choosing the experimental factors. In the case of the VOmen claim, the paradigm inTable 2is derived via two binary factors: [\u00b1VO] (VO vs. OV) and [\u00b1men] (with or without men suffixation). As noted above, He's observation doesn't relate to each factor separately, but"
            },
            "TABREF3": {
                "type_str": "table",
                "content": "<table><tr><td colspan=\"2\">Table 4. Extending the VOmen paradigm of He (2004)</td></tr><tr><td>[+men]</td><td>[-men]</td></tr><tr><td>[+VO] *chuanbo bingdu yuan men</td><td>chuanbo bingdu yuan</td></tr><tr><td>spread virus person PLURAL</td><td>spread virus person</td></tr><tr><td>[-VO] bingdu chuanbo yuan men</td><td>bingdu chuanbo yuan</td></tr><tr><td>virus spread person PLURAL</td><td>virus spread person</td></tr><tr><td>[+VO] *sheji shipin ren men</td><td>sheji shipin ren</td></tr><tr><td>design ornaments person PLURAL</td><td>design ornaments person</td></tr><tr><td>[-VO] shipin sheji ren men</td><td>shipin sheji ren</td></tr><tr><td>ornaments design person PLURAL</td><td>ornaments design person</td></tr></table>",
                "html": null,
                "num": null,
                "text": "below, regardless of any additional influences from pragmatics, frequency, suffixlikeness (zhe vs. the others), or freeness (ren vs. the others); the stars here represent what He might predict (lexical content for the new sets was chosen with the help of Ko Yu-guang and Zhang Ning)."
            },
            "TABREF4": {
                "type_str": "table",
                "content": "<table><tr><td colspan=\"2\">Set 1 (prototype) segments: zhizao</td><td>yaoyan</td><td>zhe</td><td>men</td></tr><tr><td>Set 2 segments:</td><td>chuanbo</td><td>bingdu</td><td>yuan</td><td>men</td></tr><tr><td>Set 3 segments:</td><td>sheji</td><td>shipin</td><td>ren</td><td>men</td></tr></table>",
                "html": null,
                "num": null,
                "text": "Prototype segments and new segments for the VOmen experiment"
            },
            "TABREF5": {
                "type_str": "table",
                "content": "<table><tr><td>File line:</td><td>01</td><td>20</td><td>05</td><td>01</td><td>-VO</td><td>-men</td></tr><tr><td colspan=\"2\">Explanation: speaker ID</td><td>sentence ID</td><td>set ID</td><td>order in</td><td>value of first</td><td>value of</td></tr><tr><td/><td>number</td><td>number</td><td>number</td><td>survey</td><td>factor</td><td>second factor</td></tr></table>",
                "html": null,
                "num": null,
                "text": "The structure of the schematic survey information file for the VOmen experiment MiniJudgeJS extracts judgments from the surveys and creates a data file in which each row represents a single observation, with IDs for speakers, sentences, and sets, presentation order of sentences, factor values (1 for [+] and -1 for[-]), and judgments. As an example,"
            },
            "TABREF6": {
                "type_str": "table",
                "content": "<table><tr><td colspan=\"3\">Speaker Sentence Set</td><td>Order</td><td>VO</td><td>men</td><td>Judgment</td></tr><tr><td>1</td><td>2 0</td><td>5</td><td>1</td><td>-1</td><td>-1</td><td>1</td></tr><tr><td>1</td><td>4 5</td><td>1 2</td><td>2</td><td>1</td><td>1</td><td>0</td></tr><tr><td colspan=\"2\">3.5. Analyzing the results</td><td/><td/><td/><td/><td/></tr></table>",
                "html": null,
                "num": null,
                "text": "First three lines of data file for the VOmen experiment For novice experimenters, the most intimidating aspect of psycholinguistic research is statistical analysis. MiniJudge employs quite complex statistical methods that are unfamiliar even to most psycholinguists, yet hides them behind a user-friendly interface. Data from a MiniJudge experiment are both categorical and repeated-measures (grouped within speakers). Currently the best available statistical model for repeated-measures categorical data is generalized linear mixed effect modeling (GLMM), which can be thought of as an extension of logistic regression (see e.g. Agresti et al., 2000). GLMM poses serious programming challenges, so MiniJudgeJS passes the job to R, the world's foremost free statistical package (R Development Core Team, 2005). R is an open-source near clone of the proprietary program S (Chambers & Hastie, 1993), and like S, is a full-featured programming language. Its syntax is a mixture of C++ and Matlab, and of course it has a wide variety of built-in statistical functions, including many user-written packages. The specific R package used by MiniJudgeJS for GLMM is lme4 (and its prerequisite package Matrix), authored by Douglas Bates and Deepayan Sarkar, and maintained by Douglas Bates. R has a simple GUI interface, and by default, the Windows version nativizes (e.g., in Chinese Windows, menus and basic messages are in Chinese)."
            },
            "TABREF8": {
                "type_str": "table",
                "content": "<table/>",
                "html": null,
                "num": null,
                "text": "providing a diagnostic for performance effects (a position taken byGoodall 2004). On the other hand, satiability may differ due to differences between the components of competence itself, thus permitting a new grammatical classification tool (a position taken byHiramatsu 2000)."
            },
            "TABREF9": {
                "type_str": "table",
                "content": "<table/>",
                "html": null,
                "num": null,
                "text": "5, generalized linear mixed effect modeling (GLMM) is conceptually akin to logistic regression, which is at the core of the sociolinguistic variable-rule analyzing program VARBRUL and its descendants (Mendoza-Denton et al. 2003), but unlike logistic regression, GLMM regression equations also include random variables (e.g., the speakers); see Agresti et al. (2000). One major advantage of a regression-based approach is that no data are thrown away. Moreover, since each observation is treated as a separate data point, GLMM is usually not affected much by missing data, but only if they are missing non-systematically (this is why participants in MiniJudge experiments are requested to judge all sentences, guessing if they're not sure).Though GLMM is the best statistical model currently available for repeated-measures categorical"
            },
            "TABREF10": {
                "type_str": "table",
                "content": "<table/>",
                "html": null,
                "num": null,
                "text": "JavaScript-enabled HTML file written by John C. Pezzullo. Using only basic platform-universal JavaScript, the page collects data, reformats it, estimates logistic regression coefficients via a highly efficient maximum likelihood estimation algorithm, and generates chi-square values and p values. Thus a JavaScript-only version of MiniJudgeJS is conceivable, without any need to pass work over to R."
            },
            "TABREF11": {
                "type_str": "table",
                "content": "<table><tr><td/><td>Estimate</td><td>Std. Error</td><td>z value</td><td>Pr(&gt;|z|)</td></tr><tr><td>(Intercept)</td><td>-0.0810381</td><td>0.2330613</td><td>-0.3477</td><td>0.728057</td></tr><tr><td>Factor1</td><td>-0.8090969</td><td>0.0886143</td><td>-9.1305</td><td>&lt; 2.2e-16</td></tr><tr><td>Factor2</td><td>-0.9741367</td><td>0.0891447</td><td>-10.9276</td><td>&lt; 2.2e-16</td></tr><tr><td>Order</td><td>-0.0192680</td><td>0.0059976</td><td>-3.2126</td><td>0.001315</td></tr><tr><td colspan=\"2\">Factor1:Factor2 0.0119932</td><td>0.0877194</td><td>0.1367</td><td>0.891250</td></tr></table>",
                "html": null,
                "num": null,
                "text": "). The trick is to extract the estimates (the signs of which provide information about the nature of the pattern) and the p values (which indicate significance) in order to generate a simple summary containing no numbers at all."
            },
            "TABREF12": {
                "type_str": "table",
                "content": "<table/>",
                "html": null,
                "num": null,
                "text": "First, it is wrong to think that by-item analyses check to see if any item behaves atypically (i.e., is an outlier). For parametric models like ANOVA, it is quite possible for a single outlier to cause an illusory significant result, even in a by-item analysis (categorical data analyses like GLMM don't have this weakness). To test for outliers, there's no substitute for checking the individual by-item results manually. MiniJudge helps with this by reporting the by-sentence rates of yes judgments in a table saved as part of the offline analysis file; items with unusually low or high acceptability relative to others of their type stand out clearly. In the case of the VOmen experiment, this table did not seem to show any outliers."
            }
        }
    }
}