Zhangir Azerbayev committed on
Commit
cda48cc
1 Parent(s): 231bd79

updated readme

Files changed (2):
  1. README.md +5 -3
  2. utils.py +0 -53
README.md CHANGED
@@ -56,7 +56,7 @@ This dataset contains only the training set of the [MATH dataset](https://github
 # Data Preprocessing
 This section describes any significant filtering and transformations made to various subsets of the data.
 
-### arXiv.math
+**arXiv.math**
 The arXiv.math dataset is large, heterogeneous, and contains a great deal of noise. We used the following heuristics
 when choosing which files from arXiv.math source folders to include in the dataset:
 - Keep only files with a `.tex` extension.
@@ -67,7 +67,9 @@ when choosing which files from arXiv.math source folders to include in the datas
 - Include only articles in English, as determined by the [langdetect library](https://pypi.org/project/langdetect/).
 - Exclude files shorter than 280 characters (characters counted after the substring removal described below).
+
 In addition, we apply the following transformations to arXiv.math texts:
+
 - Delete everything outside of `\begin{document}` and `\end{document}`.
 - Delete everything including or after `\Refs`, `\begin{thebibliography}`, or `\begin{bibdiv}`.
 - Delete comments.
@@ -75,7 +77,7 @@ In addition, we apply the following transformations to arXiv.math texts:
 In [this notebook](https://github.com/zhangir-azerbayev/proof-pile/blob/main/analysis/arxiv_noisedetection.ipynb), we provide an analysis of the prevalence of noisy documents in the arXiv.math subset of the
 proof-pile.
 
-### Stack Exchange
+**Stack Exchange**
 We only include questions that have at least 5 upvotes and an answer. We format Stack Exchange posts as follows:
 ```
 QUESTION [{num_upvotes} upvotes]: {text of question}
@@ -89,7 +91,7 @@ REPLY [{num_upvotes} votes]: {text of reply}
 .
 ```
 
-### set.mm
+**set.mm**
 We converted `set.mm` into human-readable form by following the instructions in the [mm-extract repo](https://github.com/zhangir-azerbayev/mm-extract).
 
 ## Contributions
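The arXiv.math heuristics described in the README diff above can be sketched in Python. This is an illustrative reconstruction, not the dataset's actual pipeline: the function names are made up, and only the 280-character threshold, the `.tex`/English filters, and the three bibliography markers come from the text (the README uses langdetect for the English check).

```python
import re

def clean_tex(text):
    """Apply the documented arXiv.math transformations to one source file."""
    # Keep only the body between \begin{document} and \end{document}.
    m = re.search(r"\\begin\{document\}(.*?)\\end\{document\}", text, re.DOTALL)
    if m:
        text = m.group(1)
    # Delete everything including and after the first bibliography marker.
    for marker in (r"\Refs", r"\begin{thebibliography}", r"\begin{bibdiv}"):
        idx = text.find(marker)
        if idx != -1:
            text = text[:idx]
    # Delete comments: an unescaped % through the end of the line.
    text = re.sub(r"(?<!\\)%.*", "", text)
    return text

def keep_file(filename, cleaned_text, is_english):
    """Filtering heuristics: .tex extension, English, >= 280 chars after cleaning."""
    return (
        filename.endswith(".tex")
        and is_english  # the README determines this with langdetect
        and len(cleaned_text) >= 280
    )
```

Note the order matters: the 280-character cutoff is applied to the text *after* the substring removals, as the README specifies.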
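The Stack Exchange template quoted in the README can be rendered by a small formatter. This is a sketch under the assumption that replies are available as (text, votes) pairs; only the `QUESTION [... upvotes]:` / `REPLY [... votes]:` wording comes from the README.

```python
def format_post(question, q_votes, replies):
    """Format one Stack Exchange post per the README template.

    replies: a list of (reply_text, votes) pairs.
    """
    parts = [f"QUESTION [{q_votes} upvotes]: {question}"]
    for text, votes in replies:
        parts.append(f"REPLY [{votes} votes]: {text}")
    return "\n\n".join(parts)
```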
utils.py DELETED
@@ -1,53 +0,0 @@
-import os
-import tarfile
-from itertools import cycle
-from shutil import get_terminal_size
-from threading import Thread
-from time import sleep
-
-def make_archive(path):
-    with tarfile.open(path + ".tar.gz", "w:gz") as tar:
-        tar.add(path, arcname=os.path.sep)
-    os.system(f"rm -r {path}")
-
-class Loader:
-    def __init__(self, desc="Loading...", end="Done!", timeout=0.1):
-        """
-        A loader-like context manager
-
-        Args:
-            desc (str, optional): The loader's description. Defaults to "Loading...".
-            end (str, optional): Final print. Defaults to "Done!".
-            timeout (float, optional): Sleep time between prints. Defaults to 0.1.
-        """
-        self.desc = desc
-        self.end = end
-        self.timeout = timeout
-
-        self._thread = Thread(target=self._animate, daemon=True)
-        self.steps = ["⢿", "⣻", "⣽", "⣾", "⣷", "⣯", "⣟", "⡿"]
-        self.done = False
-
-    def start(self):
-        self._thread.start()
-        return self
-
-    def _animate(self):
-        for c in cycle(self.steps):
-            if self.done:
-                break
-            print(f"\r{self.desc} {c}", flush=True, end="")
-            sleep(self.timeout)
-
-    def __enter__(self):
-        self.start()
-
-    def stop(self):
-        self.done = True
-        cols = get_terminal_size((80, 20)).columns
-        print("\r" + " " * cols, end="", flush=True)
-        print(f"\r{self.end}", flush=True)
-
-    def __exit__(self, exc_type, exc_value, tb):
-        # handle exceptions with those variables ^
-        self.stop()
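The deleted `make_archive` helper shelled out to `rm -r` and archived with `arcname=os.path.sep`. A minimal standalone sketch of the same compress-then-delete behavior, with two substitutions of mine: `shutil.rmtree` instead of `os.system`, and the directory's basename as the archive member name:

```python
import os
import shutil
import tarfile

def make_archive(path):
    """Compress the directory at `path` into `path`.tar.gz, then remove it."""
    with tarfile.open(path + ".tar.gz", "w:gz") as tar:
        # Store entries under the directory's own name rather than "/".
        tar.add(path, arcname=os.path.basename(path))
    # Safer, portable equivalent of os.system(f"rm -r {path}").
    shutil.rmtree(path)
```

`shutil.rmtree` avoids shell injection through `path` and raises on failure instead of silently returning a nonzero exit code.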