---
license: apache-2.0
dataset_info:
  features:
  - name: word
    dtype: string
  - name: form
    dtype: string
  - name: sentence
    dtype: string
  - name: paraphrase
    dtype: string
  splits:
  - name: train
    num_bytes: 480909
    num_examples: 1007
  - name: test
    num_bytes: 42006
    num_examples: 77
  download_size: 290128
  dataset_size: 522915
task_categories:
- text-generation
- text2text-generation
language:
- ru
size_categories:
- 1K<n<10K
---

# Dataset Card for Ru Anglicism

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
  - [Usage](#usage)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Splits](#data-splits)


## Dataset Description

### Dataset Summary

A dataset for detecting anglicisms in Russian sentences and substituting them with native equivalents. Sentences containing anglicisms were automatically collected from the National Corpus of the Russian Language, Habr, and Pikabu. The paraphrases for the sentences were written manually.


### Languages

The dataset is in Russian.

### Usage

Loading the dataset:

```python
from datasets import load_dataset

dataset = load_dataset('shershen/ru_anglicism')
```


## Dataset Structure

### Data Instances

Each instance contains four strings: `word` (the anglicism in its base form), `form` (the inflected form as it appears in the sentence), `sentence`, and `paraphrase` (the sentence with the anglicism replaced).


```
{
  'word': 'коллаб',
  'form': 'коллабу',
  'sentence': 'Сделаем коллабу, раскрутимся.',
  'paraphrase': 'Сделаем совместный проект, раскрутимся.'
}
```

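As a quick sanity check, the inflected `form` can be located in `sentence` and should be absent from `paraphrase`. A minimal sketch using the sample instance above (no download required; field names match the dataset schema):

```python
# Sample instance copied from the dataset card above.
example = {
    'word': 'коллаб',
    'form': 'коллабу',
    'sentence': 'Сделаем коллабу, раскрутимся.',
    'paraphrase': 'Сделаем совместный проект, раскрутимся.',
}

# The inflected form appears in the source sentence...
assert example['form'] in example['sentence']
# ...and is gone from the paraphrase.
assert example['form'] not in example['paraphrase']
print('paraphrase check passed')
```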

### Data Splits

The full dataset contains 1084 sentences, split as follows:


| Dataset Split | Number of Rows |
|:--------------|:---------------|
| Train         | 1007           |
| Test          | 77             |
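For reference, the split sizes above imply roughly a 93/7 train/test ratio; a short check of the arithmetic:

```python
# Split sizes taken from the table above.
train_rows, test_rows = 1007, 77
total = train_rows + test_rows
assert total == 1084  # matches the full dataset size stated above

print(f"test fraction: {test_rows / total:.1%}")  # → test fraction: 7.1%
```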