---
license: mit
---

# **MaE: Math Misconceptions and Errors Dataset**

This dataset supports the research described in the paper [A Benchmark for Math Misconceptions: Bridging Gaps in Middle School Algebra with AI-Supported Instruction](https://arxiv.org/abs/your-link) by Nancy Otero, Stefania Druga, and Andrew Lan.

## **Overview**

The **MaE (Math Misconceptions and Errors)** dataset is a collection of 220 diagnostic examples, designed by math learning researchers, that represent 55 common algebra misconceptions among middle school students. It aims to provide insight into student errors and misconceptions in algebra, supporting the development of AI-enhanced educational tools that can improve math instruction and learning outcomes.

## **Dataset Details**

* Total misconceptions: 55
* Total examples: 220
* Topics covered:

1. **Number sense** (MaE01-MaE05)
   - Understanding numbers and their relationships
2. **Number operations** (MaE06-MaE22)
   - Integer subtraction
   - Fraction and decimal operations
   - Order of operations
3. **Ratios and proportional reasoning** (MaE23-MaE28)
   - Ratio concepts
   - Proportional thinking
   - Problem-solving with ratios
4. **Properties of numbers and operations** (MaE31-MaE34)
   - Commutative, associative, and distributive properties
   - Algebraic manipulations
   - Order of operations
5. **Patterns, relationships, and functions** (MaE35-MaE42)
   - Pattern analysis and generalization
   - Tables, graphs, and symbolic rules
   - Function relationships
6. **Algebraic representations** (MaE43-MaE44)
   - Symbolic expressions and graphs
   - Multiple representations
   - Linear equations
7. **Variables, expressions, and operations** (MaE45-MaE48)
   - Expression structure
   - Polynomial arithmetic
   - Equation creation and reasoning
8. **Equations and inequalities** (MaE49-MaE55)
   - Linear equations and inequalities
   - Proportional relationships
   - Function modeling

Each misconception is represented by four diagnostic examples featuring both correct and incorrect answers. The examples include detailed explanations that highlight the reasoning behind the errors.

## **Data Format**

The dataset is stored in JSON format with the following fields:

* **Misconception:** Description of the misconception.
* **Misconception ID:** Unique identifier for each misconception.
* **Topic:** Category of the misconception.
* **4 Diagnostic Examples,** each containing:
  - Question
  - Incorrect answer demonstrating the misconception
  - Correct answer
  - Source reference
  - Images or graphs (where applicable)

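The schema above can be sketched in Python. The key names, the sample misconception, and the file name in the comment below are illustrative assumptions, not the dataset's exact layout:

```python
import json

# Illustrative record following the field list above; the actual key names
# in the released JSON file may differ -- check the file before relying on them.
SAMPLE_RECORDS = [
    {
        "misconception": "When subtracting, students take the smaller digit "
                         "from the larger one regardless of position.",
        "misconception_id": "MaE06",
        "topic": "Number operations",
        "examples": [
            {
                "question": "Compute 52 - 38.",
                "incorrect_answer": "26",  # digit-wise: 5-3=2 and 8-2=6
                "correct_answer": "14",
                "source": "(reference)",
            },
        ],
    },
]


def index_by_topic(records):
    """Map each topic to the misconception IDs filed under it."""
    topics = {}
    for record in records:
        topics.setdefault(record["topic"], []).append(record["misconception_id"])
    return topics


# With the released file you would load it first, e.g.:
#   with open("mae_data.json") as f:
#       records = json.load(f)
print(json.dumps(index_by_topic(SAMPLE_RECORDS), indent=2))
```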
## **Validation**

* Dataset tested with GPT-4, achieving 83.9% accuracy when constrained by topic
* Validated by middle school math educators
* 80% of surveyed teachers confirmed encountering these misconceptions in their classrooms

## **Intended Use**

The MaE dataset is designed to:

1. **Support AI development:** AI models can use this dataset to diagnose algebra misconceptions in students' responses.
2. **Aid educators:** Teachers can use the dataset to understand common student errors and adjust instruction accordingly.
3. **Enhance curriculum design:** By identifying frequent misconceptions, curriculum developers can create targeted interventions to address these learning gaps.

## **Experimental Results**

The dataset was evaluated using GPT-4 in two main experiments assessing its effectiveness in identifying math misconceptions. In both experiments the GPT-4 parameters were temperature=0.2, max_tokens=2000, and frequency_penalty=0.0.

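A minimal sketch of how one classification call with these settings might be assembled, assuming the OpenAI chat-completions client; the prompt wording and message structure are illustrative, not the authors' exact setup:

```python
# Sketch of one classification request using the reported decoding
# parameters. The prompt text is illustrative; the paper's exact prompts
# are not reproduced here.
def build_request(train_example: str, test_example: str) -> dict:
    """Assemble keyword arguments for a chat-completion call."""
    return {
        "model": "gpt-4",
        "temperature": 0.2,
        "max_tokens": 2000,
        "frequency_penalty": 0.0,
        "messages": [
            {"role": "system",
             "content": "You identify middle-school algebra misconceptions."},
            {"role": "user",
             "content": (f"Labeled example:\n{train_example}\n\n"
                         f"Which misconception does this student answer "
                         f"show?\n{test_example}")},
        ],
    }

# The dict unpacks directly into the OpenAI client, e.g.:
#   client.chat.completions.create(**build_request(train, test))
```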
## **Experimental Design**

* **Experiment 1 (Cross-Topic Testing):** One example from each misconception was randomly selected as training data, and another example was randomly selected as test data from the entire dataset. This approach tested the model's ability to identify misconceptions across all topics without constraints.
* **Experiment 2 (Topic-Constrained Testing):** Similar to Experiment 1, but test examples were selected only from within the same topic as the training example (e.g., when training on a "Number operations" misconception, testing used only other "Number operations" examples). This approach evaluated the model's performance when constrained to specific mathematical domains.

Both experiments were repeated 100 times to ensure robust results, and each used the same format: GPT-4 was given one example to learn from and then asked to identify the misconception in new examples.

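A rough sketch of this sampling protocol, using a hypothetical in-memory layout (the real dataset has 55 misconceptions with four examples each):

```python
import random

# Toy stand-in for the dataset: misconception_id -> topic and its examples.
TOY_DATASET = {
    "MaE06": {"topic": "Number operations", "examples": ["q1", "q2", "q3", "q4"]},
    "MaE07": {"topic": "Number operations", "examples": ["q5", "q6", "q7", "q8"]},
    "MaE23": {"topic": "Ratios", "examples": ["q9", "q10", "q11", "q12"]},
}

def sample_trial(dataset, topic_constrained, rng):
    """One trial: a training example per misconception, plus one test example.

    For each misconception, one example is held out as the training shot.
    The test example is then drawn from the whole dataset (Experiment 1)
    or only from misconceptions in the same topic (Experiment 2).
    """
    tasks = []
    for mid, rec in dataset.items():
        train_ex = rng.choice(rec["examples"])
        if topic_constrained:
            pool = [m for m, r in dataset.items() if r["topic"] == rec["topic"]]
        else:
            pool = list(dataset)
        test_mid = rng.choice(pool)
        remaining = [ex for ex in dataset[test_mid]["examples"] if ex != train_ex]
        tasks.append((mid, train_ex, test_mid, rng.choice(remaining)))
    return tasks

rng = random.Random(0)
for _ in range(100):  # both experiments were repeated 100 times
    tasks = sample_trial(TOY_DATASET, topic_constrained=True, rng=rng)
    # each (train, test) pair would then be presented to GPT-4
```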
### **General Performance**

* **Experiment 1** (random selection across all topics):
  - Precision: 0.526
  - Recall: 0.529
  - Overall accuracy: 65.45% (including expert-validated corrections)
* **Experiment 2** (topic-constrained testing):
  - Precision: 0.753
  - Recall: 0.748
  - Overall accuracy: 83.91% (including expert-validated corrections)

![Figure 3](Figure_3.png)

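For reference, precision, recall, and accuracy over predicted misconception labels can be computed along these lines; macro-averaging over labels is an assumption here, as the paper's exact averaging scheme is not restated in this card:

```python
from collections import Counter

def macro_precision_recall(y_true, y_pred):
    """Macro-averaged precision and recall over misconception labels."""
    labels = set(y_true) | set(y_pred)
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1  # predicted label p, but it was wrong
            fn[t] += 1  # true label t was missed
    precisions, recalls = [], []
    for lab in labels:
        denom_p = tp[lab] + fp[lab]
        denom_r = tp[lab] + fn[lab]
        precisions.append(tp[lab] / denom_p if denom_p else 0.0)
        recalls.append(tp[lab] / denom_r if denom_r else 0.0)
    return sum(precisions) / len(labels), sum(recalls) / len(labels)

def accuracy(y_true, y_pred):
    """Fraction of test examples whose predicted label matches the true one."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```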
### **Topic-Specific Performance**

Performance varied significantly across mathematical topics:

* **Highest performance:**
  - "Algebraic representations" achieved perfect scores (1.0) in topic-constrained testing
  - "Number operations" showed strong results, with 0.685 precision and 0.77 recall in general testing
* **Challenging areas:**
  - "Ratios and proportional thinking" proved most challenging, with the lowest scores:
    + General testing: 0.215 precision, 0.191 recall
    + Topic-constrained testing: 0.286 precision, 0.333 recall

![Figure 4](Figure_4.png)

![Figure 5](Figure_5.png)

## **Expert Validation**

Two experienced algebra educators reviewed GPT-4's misconception classifications, focusing particularly on cases where the model's predictions differed from the original dataset labels. The educators agreed on 90.91% of their assessments and resolved disagreements through joint review. Their analysis revealed several important insights:

* Some student answers demonstrated multiple valid misconceptions beyond the original single label
* Certain misconceptions were found to be subsets of broader misconceptions
* A portion of GPT-4's apparent "errors" were actually valid alternative classifications

This expert validation significantly improved the assessed accuracy of GPT-4:

* In Experiment 1, initial accuracy of 52.96% increased to 65.45%
* In Experiment 2, initial accuracy of 73.82% increased to 83.91%

![Figure 6](Figure_6.png)

These results demonstrate that:

1. Some mathematical concepts, particularly ratios and proportional thinking, remain challenging for AI to assess
2. The model performs best when evaluating misconceptions within their specific topic domains
3. Expert validation plays a crucial role in assessing and improving model accuracy

The experimental outcomes suggest that while AI can effectively identify many common mathematical misconceptions, its performance is best when it operates within specific topic constraints and is supplemented by expert oversight.