Update README.md
## Summary

An extracted subset of [Helsinki-NLP/opus-100](https://huggingface.co/datasets/Helsinki-NLP/opus-100), reshaped into two columns (`english`, `kurdish`). Note: some low-quality pairs were noticed; classifying and selecting high-quality pairs would be a good follow-up project.

## Usage

```python
from datasets import load_dataset

ds = load_dataset("nazimali/kurdish-english-opus-100", split="train")
ds
```

```python
Dataset({
    features: ['english', 'kurdish'],
    num_rows: 148844
})
```
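The quality-filtering project mentioned in the summary could start with a simple heuristic before moving to a learned classifier. The sketch below is a hypothetical example (the function name, thresholds, and sample pairs are assumptions, not part of the dataset): it drops pairs where either side is near-empty or the character-length ratio is extreme.

```python
def is_plausible_pair(english: str, kurdish: str,
                      min_chars: int = 3, max_ratio: float = 3.0) -> bool:
    """Heuristic quality check for a translation pair: both sides must be
    non-trivial and their character lengths within a bounded ratio.
    Thresholds here are illustrative assumptions, not tuned values."""
    e, k = english.strip(), kurdish.strip()
    if len(e) < min_chars or len(k) < min_chars:
        return False
    # A very lopsided length ratio often signals misalignment or truncation.
    ratio = max(len(e), len(k)) / min(len(e), len(k))
    return ratio <= max_ratio


# Made-up pairs for illustration only:
pairs = [
    {"english": "Good morning.", "kurdish": "Beyanî baş."},
    {"english": "Hello", "kurdish": ""},  # empty target -> filtered out
]
kept = [p for p in pairs if is_plausible_pair(p["english"], p["kurdish"])]
print(len(kept))  # 1
```

With the dataset loaded as above, the same predicate could be applied via `ds.filter(lambda x: is_plausible_pair(x["english"], x["kurdish"]))`.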