# Documentation

This page contains some *really* quick docs and notes on filtering with SuperWIKI NEXT.

## wikipedia_soup.py

...Is the main module that handles the bulk of the filtering.

Each filter has code documentation explaining what each function generally does, so I'd suggest reading those instead.

### Usage for wikipedia_soup.py

Probably the most important bit.

`wikipedia_soup.py` takes in `*.ndjson` files directly from the Wikipedia Enterprise HTML dumps via the `process-root` command.

*Note: there are 3 publicly exposed commands via Typer: `process-root`, `process-folder`, and `process-file`.*

`process-root` is probably what you want to use. It takes in the following folder structure:

```
dumps <- Input folder for [process-root]
 |-afwiki-NS0-20240420-ENTERPRISE-HTML <- Input folder for [process-folder]
    |-afwiki_namespace_0_0.ndjson <- Input file for [process-file]
    |-afwiki_namespace_0_1.ndjson
    |-afwiki_namespace_0_2.ndjson
    ...
 |-arwiki-NS0-20240420-ENTERPRISE-HTML
    |-arwiki_namespace_0_0.ndjson
    |-arwiki_namespace_0_1.ndjson
    |-arwiki_namespace_0_2.ndjson
    ...
 ... And so on...
```

Downloading and filtering the files is relatively easy; a rough sketch of the download and extract steps follows the list.

1. Get a list of HTTP URLs (whichever way you prefer).
2. Download said list (wget, curl, aria2c, etc.).
3. Extract the tar files into their own folders as shown above.
4. Run the `process-root` command.
5. Patience.
6. ???
7. Finished!
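
For steps 2 and 3, here is a rough Python sketch of downloading one dump and extracting it into the layout above. The URL and paths are illustrative; `wget`/`aria2c` plus `tar` work just as well.

```python
# Illustrative only: download one Enterprise HTML dump and extract it into
# the "dumps/<wiki>-NS0-<date>-ENTERPRISE-HTML/" layout shown above.
import tarfile
import urllib.request
from pathlib import Path

urls = [
    "https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/afwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz",
]

dumps = Path("dumps")
dumps.mkdir(exist_ok=True)

for url in urls:
    archive = dumps / url.rsplit("/", 1)[-1]
    urllib.request.urlretrieve(url, archive)
    # Each archive gets its own folder, named after the dump (Python 3.9+ for removesuffix).
    target = dumps / archive.name.removesuffix(".json.tar.gz")
    target.mkdir(exist_ok=True)
    with tarfile.open(archive) as tar:
        tar.extractall(target)

# Afterwards, point the `process-root` command at the `dumps` folder (step 4).
```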



## wikipedia_template.py

This file contains templates used in Wikipedia articles.

If you do need to update a template, follow these steps:

1. Open your web browser.
2. Go to the following URL, replacing `<ID>` with the relevant Wikidata entry ID:

```
https://www.wikidata.org/w/api.php?action=wbgetentities&ids=<ID>&format=json&props=labels
```

The Wikidata IDs for the related templates:

- Stubs: `Q4663261`
- Citation needed: `Q7106262`
- Redirect: `Q6042392`

**Note:** For Sections, there are currently no templates available. These must be added manually.
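
If you prefer to script the lookup, here is a minimal sketch using the same `wbgetentities` endpoint and the IDs listed above. It assumes the `requests` package, which is not necessarily part of this repo.

```python
import requests

# Stubs, Citation needed, Redirect (IDs from the list above).
IDS = ["Q4663261", "Q7106262", "Q6042392"]

resp = requests.get(
    "https://www.wikidata.org/w/api.php",
    params={
        "action": "wbgetentities",
        "ids": "|".join(IDS),
        "format": "json",
        "props": "labels",
    },
    timeout=30,
)
resp.raise_for_status()

for qid, entity in resp.json()["entities"].items():
    labels = entity.get("labels", {})
    # labels maps language codes to {"language": ..., "value": ...}.
    print(qid, f"{len(labels)} labels,", "en =", labels.get("en", {}).get("value"))
```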

## mediawiki_soup.py

This module implements the `MediaWikiSoup` class. It was written before the introduction of Hugging Face's Datatrove and for the sake of simpler code development.

This class processes HTML content into markdown format and performs additional post-processing steps on the resulting markdown.

`MediaWikiSoup` leverages a "filter chain" architecture. You can extend its functionality by adding filter functions using either `add_markdown_filter` (for markdown processing) or `add_soup_filter` (for BeautifulSoup processing).
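
As a rough illustration of that filter-chain idea (the method names come from the text above; the real class's signatures and internals may differ):

```python
from bs4 import BeautifulSoup

class FilterChainSketch:
    """Not the actual MediaWikiSoup class, just the shape of the idea."""

    def __init__(self, to_markdown):
        self.to_markdown = to_markdown   # callable: BeautifulSoup -> str
        self.soup_filters = []           # run on the BeautifulSoup tree
        self.markdown_filters = []       # run on the converted markdown

    def add_soup_filter(self, fn):
        self.soup_filters.append(fn)

    def add_markdown_filter(self, fn):
        self.markdown_filters.append(fn)

    def process(self, html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        for fn in self.soup_filters:
            soup = fn(soup)
        markdown = self.to_markdown(soup)
        for fn in self.markdown_filters:
            markdown = fn(markdown)
        return markdown
```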

## html2markdown.py

Contains a customized markdownify instance. Since this is mainly carried over from 1.5, the details are a bit hazy.

For `<a>` elements, I only use the text contained. That is to say, I don't include the href.

```html
<a href="//example.com">This is an example</a>
```

Will be md'd into:

```md
This is an example
```

For image elements:

```html
<img src="//example.com" alt="Alt Text"/>
```

Will be md'd into:

```md
Alt Text
```
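
A minimal sketch of both rules using markdownify's `MarkdownConverter`; the actual converter in `html2markdown.py` likely differs, and converter method signatures vary between markdownify versions.

```python
from markdownify import MarkdownConverter

class TextOnlyConverter(MarkdownConverter):
    def convert_a(self, el, text, *args, **kwargs):
        # Keep only the link text, drop the href.
        return text

    def convert_img(self, el, text, *args, **kwargs):
        # Keep only the alt text instead of emitting a markdown image.
        return el.attrs.get("alt", "")

print(TextOnlyConverter().convert('<a href="//example.com">This is an example</a>'))
# -> This is an example
```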

For `<li>` elements, I'm unsure what the reason behind the custom handling was. Now, God/LLM/Model/??? only knows.

## folders2jsonl.py

Is a simple script that converts chunked ndjson files into a single file for ease of processing.
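
The gist of it, as a hedged sketch (folder and output paths are illustrative; the real script's interface may differ):

```python
# Concatenate chunked *.ndjson files in a dump folder into one .jsonl file.
from pathlib import Path

def merge_ndjson(folder: Path, output: Path) -> None:
    with output.open("w", encoding="utf-8") as out:
        for chunk in sorted(folder.glob("*.ndjson")):
            with chunk.open("r", encoding="utf-8") as f:
                for line in f:
                    line = line.strip()
                    if line:  # skip blank lines
                        out.write(line + "\n")

merge_ndjson(Path("dumps/afwiki-NS0-20240420-ENTERPRISE-HTML"), Path("afwiki.jsonl"))
```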

# Tools

Extra tools unrelated to the main filtering, but used in some shape or form.

## tools/wikipedia_eligablewiki.py

As the title says, it selects, without bias, the Wikipedia editions that have enough content. Refer to `Selection of Wikipedia` for how this was computed.

The stats `.json` file can be fetched from the source page here: https://commons.wikimedia.org/w/index.php?title=Data:Wikipedia_statistics/data.tab&action=edit

Copy the JSON from the source into a `.json` file and you should be good to go.

If you need to filter a list of URLs, it should look like this:

Using mirror.accum.se as the mirror:
```txt
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/amiwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/amwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/angwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs/20240420/anwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
```

Or with the official dumps:
```txt
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/amiwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/amwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/angwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
https://dumps.wikimedia.org/other/enterprise_html/runs/20240420/anwiki-NS0-20240420-ENTERPRISE-HTML.json.tar.gz
```
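
If you want to generate such a list yourself, the URL pattern above is easy to template. A small hypothetical helper (the wiki codes would come from whatever the eligibility filter selected):

```python
BASE = "https://mirror.accum.se/mirror/wikimedia.org/other/enterprise_html/runs"
DATE = "20240420"

def dump_url(wiki: str, date: str = DATE, base: str = BASE) -> str:
    # Matches the naming scheme of the example URLs above.
    return f"{base}/{date}/{wiki}-NS0-{date}-ENTERPRISE-HTML.json.tar.gz"

with open("urls.txt", "w", encoding="utf-8") as f:
    for wiki in ["amiwiki", "amwiki", "angwiki", "anwiki"]:
        f.write(dump_url(wiki) + "\n")
```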

## tools/wikipedia_pageview.py

Not used in NEXT, but included. The idea is to accumulate all pageviews and filter each article based on pageviews. While it's a neat idea, I just didn't use it.

## tools/wikipedia_mediaalias.py

Pretty sure it's unfinished; I didn't use it in the end. It's similar to the pageview tool, though someone could improvise and use it.