---
dataset_info:
- config_name: '20231001'
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2150584347
    num_examples: 1857355
  download_size: 0
  dataset_size: 2150584347
- config_name: latest
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2150584347
    num_examples: 1857355
  download_size: 0
  dataset_size: 2150584347
configs:
- config_name: '20231001'
  data_files:
  - split: train
    path: 20231001/train-*
- config_name: latest
  data_files:
  - split: train
    path: latest/train-*
---
# Dataset Card for Wikipedia - Portuguese

Portuguese-language Wikipedia articles extracted from the Wikipedia dumps. Each example holds an article's `id`, `title`, and plain `text`.

## Dataset Description

Two configurations are available:

- `latest`
- `20231001`

## Usage

```python
from datasets import load_dataset

dataset = load_dataset('pablo-moreira/wikipedia-pt', 'latest')
# dataset = load_dataset('pablo-moreira/wikipedia-pt', '20231001')
```
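
Each configuration has a single `train` split, and every example carries the `id`, `title`, and `text` string features declared in the metadata above. A minimal sketch of loading just that split and peeking at a few articles:

```python
from datasets import load_dataset

# Load only the train split of the latest snapshot.
dataset = load_dataset('pablo-moreira/wikipedia-pt', 'latest', split='train')

print(dataset.features)  # id, title and text are all string features
print(len(dataset))      # number of articles in the split

# Peek at the first few articles.
for article in dataset.select(range(3)):
    print(article['id'], article['title'], article['text'][:80])
```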
## Extractor

The documents were extracted from the Wikipedia dump with a notebook based on code from the fast.ai "A Code-First Intro to Natural Language Processing" course. A minimal sketch of this kind of pipeline is shown after the links below.

[Notebook](extractor.ipynb)

- **[Wikipedia dumps](https://dumps.wikimedia.org/)**
- **[A Code-First Intro to Natural Language Processing](https://github.com/fastai/course-nlp)**
- **[Extractor Code](https://github.com/fastai/course-nlp/blob/master/nlputils.py)**
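
For reference, here is a minimal sketch of this kind of extraction pipeline, assuming the `wikiextractor` package is available; the dump file name and output directory below are illustrative placeholders, not the exact values used in the notebook:

```python
import json
import subprocess
from pathlib import Path

# Illustrative names: the real dump file depends on the snapshot date.
dump_path = Path('ptwiki-20231001-pages-articles.xml.bz2')
out_dir = Path('extracted')

# WikiExtractor turns the raw XML dump into JSON-lines files,
# one JSON object per article with 'id', 'title' and 'text' fields.
subprocess.run(
    ['python', '-m', 'wikiextractor.WikiExtractor',
     str(dump_path), '--json', '--output', str(out_dir)],
    check=True,
)

# Collect the extracted articles, skipping the empty ones.
articles = []
for part in sorted(out_dir.glob('*/wiki_*')):
    with open(part, encoding='utf-8') as f:
        for line in f:
            article = json.loads(line)
            if article['text'].strip():
                articles.append({'id': article['id'],
                                 'title': article['title'],
                                 'text': article['text']})
```

The fast.ai `nlputils.py` linked above wraps similar steps (fetching the dump and invoking WikiExtractor) in helper functions.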