eckendoerffer committed
Commit 94a67f1
1 Parent(s): 8ae2c07
Update README.md
README.md CHANGED
@@ -33,7 +33,7 @@ The dataset is divided into the following splits:
  - `test.txt` : 192 MB - 100,575 rows - 5%
  - `valid.txt`: 192 MB - 100,575 rows - 5%

- Each article in the dataset exceeds
+ Each article in the dataset exceeds 1400 characters in length.

  ## Data Cleaning and Preprocessing

@@ -55,4 +55,38 @@ You can use the `explore_dataset.py` script to explore the dataset by randomly d

## Additional Information

This dataset is a subset of a larger 10GB French dataset, which also contains several thousand books and theses in French, as well as several hundred thousand Francophone news articles.

---

# WIKIPEDIA EXTRACT

Inside the `/extract_wiki/` directory, you'll find the Python scripts used to extract the text that makes up this dataset.

## Requirements:
```bash
pip install datasets aiohttp aiofiles beautifulsoup4 langid
```

## Scripts:

1. **1_extract_link.py**
```bash
python 1_extract_link.py
```
Downloads the Wikipedia dataset from Hugging Face, extracts the article URLs, and saves them to a text file for further processing.
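The script itself is not reproduced in this README; purely as an illustration of the step, here is a minimal sketch. The `20220301.fr` config of the `wikipedia` dataset and the `wiki_links.txt` output path are assumptions, not values taken from `1_extract_link.py`:
```python
# Minimal sketch of the link-extraction step (illustrative, not the real script).
# Assumes the preprocessed "20220301.fr" snapshot of the `wikipedia` dataset and
# an output file named wiki_links.txt.
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20220301.fr", split="train")

with open("wiki_links.txt", "w", encoding="utf-8") as out:
    for record in wiki:
        out.write(record["url"] + "\n")  # each record carries the article URL
```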

2. **2_extract_content.py**
```bash
python 2_extract_content.py
```
Retrieves the source code of the Wikipedia pages listed in that text file. Instead of saving the entire HTML of each page, it trims the content down to the main article section, which limits the size of each record.
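As a rough picture of what this step involves (not the actual `2_extract_content.py`), a minimal asynchronous sketch follows. The `wiki_links.txt` input, the `html/` output directory, and the `mw-content-text` container id are assumptions; the real script presumably also throttles requests and handles errors:
```python
# Minimal sketch of the page-fetching step (illustrative, not the real script).
# Downloads each page, keeps only the main article container, and writes it to disk.
import asyncio
import os

import aiofiles
import aiohttp
from bs4 import BeautifulSoup

async def fetch_and_trim(session: aiohttp.ClientSession, url: str, index: int) -> None:
    async with session.get(url) as resp:
        html = await resp.text()
    soup = BeautifulSoup(html, "html.parser")
    content = soup.find("div", id="mw-content-text")  # main article section on Wikipedia pages
    if content is not None:
        async with aiofiles.open(f"html/{index}.html", "w", encoding="utf-8") as f:
            await f.write(str(content))

async def main() -> None:
    os.makedirs("html", exist_ok=True)
    with open("wiki_links.txt", encoding="utf-8") as f:
        urls = [line.strip() for line in f if line.strip()]
    async with aiohttp.ClientSession() as session:
        await asyncio.gather(*(fetch_and_trim(session, url, i) for i, url in enumerate(urls)))

asyncio.run(main())
```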

3. **3_extract_txt.py**
```bash
python 3_extract_txt.py
```
Extracts the text from the saved HTML pages and runs checks to decide which content should be retained or excluded: language detection, special characters, proportion of digits, and so on.
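Again as an illustration only, a filtering pass along these lines is sketched below. The thresholds, file layout, and the 10% digit cap are assumptions rather than the script's actual rules; the 1,400-character floor echoes the dataset description above:
```python
# Minimal sketch of the text-extraction and filtering step (illustrative, not
# the real script). Converts saved HTML to plain text, then keeps only records
# that pass a language check and simple character-based heuristics.
import glob

import langid
from bs4 import BeautifulSoup

def keep(text: str) -> bool:
    if len(text) < 1400:                      # articles in the dataset exceed 1,400 characters
        return False
    lang, _ = langid.classify(text)           # language check
    if lang != "fr":
        return False
    digits = sum(ch.isdigit() for ch in text)
    if digits / len(text) > 0.10:             # assumed cap on digit-heavy pages
        return False
    return True

with open("train.txt", "w", encoding="utf-8") as out:   # assumed output file
    for path in sorted(glob.glob("html/*.html")):
        with open(path, encoding="utf-8") as f:
            soup = BeautifulSoup(f.read(), "html.parser")
        text = soup.get_text(separator=" ", strip=True)
        if keep(text):
            out.write(text + "\n")
```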