AnReu committed on
Commit
dd4c2d6
1 Parent(s): 71d85fe

Update README.md

Files changed (1)
  1. README.md +11 -0
README.md CHANGED
@@ -11,8 +11,19 @@ datasets:
  # Math-aware ALBERT

  This repository contains our best *base* model for ARQMath 3. It was initialised from ALBERT-base-v2 and further pre-trained on Math StackExchange in three different stages. We also added more LaTeX tokens to the tokenizer to enable better tokenization of mathematical formulas. This model is not yet fine-tuned on a specific task. If you are looking for the fine-tuned model, please refer to this page: [AnReu/albert-for-arqmath-3](https://huggingface.co/AnReu/albert-for-arqmath-3)
+
+ # Training Details
+
+ The model was instantiated from ALBERT-base-v2 weights and further pre-trained in three stages, using different data for the sentence order prediction (SOP) task. During all three stages, the masked language modelling task was trained simultaneously. In addition, we added around 500 LaTeX tokens to the tokenizer to better cope with mathematical formulas.
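As a quick check of the extended vocabulary, here is a minimal sketch, assuming the Hugging Face `transformers` library, that loads the tokenizer from this repository and tokenizes a formula-bearing sentence (the sentence and the exact token split are illustrative):

```python
from transformers import AutoTokenizer

# Tokenizer from this repository, including the ~500 added LaTeX tokens.
tokenizer = AutoTokenizer.from_pretrained("AnReu/math_albert")

# LaTeX commands such as \frac should now survive as dedicated tokens
# instead of being shattered into single characters.
tokens = tokenizer.tokenize("Solve $\\frac{x^2 - 1}{x + 1} = 0$ for $x$.")
print(tokens)
```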
+
+ The image below illustrates the three pre-training stages. First, we train on mathematical formulas only: the SOP classifier predicts which segment contains the left-hand side of the formula and which one contains the right-hand side. This way we model inter-formula coherence. The second stage models formula-sentence coherence, i.e., whether the formula comes first in the original document or whether the natural-language part comes first. Finally, we add the inter-sentence-coherence stage that is the default for ALBERT. In this stage, sentences are split by a sentence separator.
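To make the stage-1 objective concrete, here is a hypothetical sketch of how such SOP pairs could be built from a formula; the function name and the split-at-`=` heuristic are assumptions for illustration, not the exact pipeline from the paper:

```python
import random

def make_stage1_sop_example(formula: str):
    """Hypothetical stage-1 SOP pair: split a formula at '=' and
    randomly swap the two segments. Label 0 = original order
    (left-hand side first), label 1 = swapped order."""
    lhs, rhs = formula.split("=", 1)
    if random.random() < 0.5:
        return (lhs.strip(), rhs.strip()), 0  # segments in document order
    return (rhs.strip(), lhs.strip()), 1      # segments swapped

# The SOP classifier must recover which segment is the left-hand side.
print(make_stage1_sop_example("E = mc^2"))
```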
+
+ ![Image](https://huggingface.co/AnReu/math_albert/resolve/main/Screenshot%202022-09-02%20at%2018.06.04.png)
+
  For further details, please read our paper: http://ceur-ws.org/Vol-3180/paper-07.pdf.

+
+
  # Usage

  You can use this model to further fine-tune it on any math-aware task you have in mind, e.g., classification, question answering, etc. Please note that the model in this repository is only pre-trained, not fine-tuned. If you are looking for the fine-tuned model, please refer to this page: [AnReu/albert-for-arqmath-3](https://huggingface.co/AnReu/albert-for-arqmath-3)
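For instance, a minimal fine-tuning setup could start like the following sketch, assuming the `transformers` library; the task, `num_labels`, and the example input are placeholders:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the pre-trained math-aware ALBERT and attach a fresh classification head.
tokenizer = AutoTokenizer.from_pretrained("AnReu/math_albert")
model = AutoModelForSequenceClassification.from_pretrained(
    "AnReu/math_albert",
    num_labels=2,  # placeholder: set to the number of classes in your task
)

# One forward pass; the head is randomly initialised, so fine-tune before use.
inputs = tokenizer(
    "Is $\\sum_{n=1}^{\\infty} \\frac{1}{n^2}$ convergent?",
    return_tensors="pt",
)
logits = model(**inputs).logits
print(logits.shape)  # (1, num_labels)
```

From here, standard fine-tuning (e.g., with the `Trainer` API) applies.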