witiko committed
Commit f7cf9ee
1 Parent(s): 4fec9fc

Update README.md


Add intrinsic evaluation results

Files changed (1): README.md (+8 -1)
README.md CHANGED
@@ -95,14 +95,21 @@ output = model(**encoded_input)
 
 ## Training data
 
-The RoBERTa model was fine-tuned on two datasets:
+Our model was fine-tuned on two datasets:
 
 - [ArXMLiv 2020][5], a dataset consisting of 1,581,037 ArXiv documents.
 - [Math StackExchange][6], a dataset of 2,466,080 questions and answers.
 
 Together these datasets weigh 52 GB of text and LaTeX.
 
+## Intrinsic evaluation results
+
+Our model achieves the following intrinsic evaluation results:
+
+![Intrinsic evaluation results of MathBERTa][11]
+
 [5]: https://sigmathling.kwarc.info/resources/arxmliv-dataset-2020/
 [6]: https://www.cs.rit.edu/~dprl/ARQMath/arqmath-resources.html
 [9]: https://github.com/huggingface/transformers/issues/16936
 [10]: https://github.com/huggingface/transformers/pull/17119
+[11]: https://huggingface.co/witiko/mathberta/resolve/main/learning-curves.png
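
The hunk header above quotes `output = model(**encoded_input)` from the README's usage section. For orientation, the sketch below shows how that call could be reached with the 🤗 Transformers library, assuming the model is published as the `witiko/mathberta` repository named in link [11]; the use of `AutoTokenizer`/`AutoModel` and the example sentence are illustrative assumptions, not taken from the README itself.

```python
# Hypothetical sketch, not part of this commit: load the MathBERTa checkpoint
# assumed to live at the Hugging Face repository "witiko/mathberta" and run the
# encoder, ending in the `output = model(**encoded_input)` line quoted in the
# hunk context above.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("witiko/mathberta")
model = AutoModel.from_pretrained("witiko/mathberta")

# Illustrative input; how LaTeX math should be marked up is governed by the
# README's own usage example, not by this sketch.
text = r"The Pythagorean theorem states that $a^2 + b^2 = c^2$."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)

# Contextual token embeddings: (batch size, sequence length, hidden size).
print(output.last_hidden_state.shape)
```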