---
dataset_info:
  features:
  - name: url
    dtype: string
  - name: authors
    sequence: string
  - name: tags
    sequence: string
  - name: description
    dtype: string
  - name: likes
    dtype: int64
  - name: parts
    list:
    - name: clean_text
      dtype: string
    - name: date
      dtype: string
    - name: text
      dtype: string
    - name: title
      dtype: string
    - name: url
      dtype: string
  - name: part_count
    dtype: int64
  - name: title
    dtype: string
  - name: rating
    dtype: string
  - name: status
    dtype: string
  - name: direction
    dtype: string
  - name: category
    dtype: string
  - name: pairing
    dtype: string
  splits:
  - name: train
    num_bytes: 133011363906
    num_examples: 1390475
  download_size: 68013121186
  dataset_size: 133011363906
language:
- ru
pretty_name: Ficbook Refined
tags:
- not-for-all-audiences
- roleplay
task_categories:
- text-generation
size_categories:
- 1M<n<10M
---
# Ficbook dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
## Description
**Summary:** Dataset of 1.4M fan fiction stories from [ficbook.net](https://ficbook.net/). Dataset collection is still in progress.
**Script:** [create_ficbook.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_ficbook.py)
**Point of Contact:** [Ilya Gusev](mailto:phoenixilya@gmail.com)
**Languages:** Mostly Russian
## Usage
Dataset iteration:
```python
from datasets import load_dataset

dataset = load_dataset("IlyaGusev/ficbook", split="train", streaming=True)
for example in dataset:
    print(example["parts"][0]["clean_text"])
```
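Each example stores its chapters in the `parts` list described in the metadata above. A minimal sketch of reassembling a story from its chapters, using a hand-built record that mirrors the schema (the field values here are placeholders, since iterating the real dataset requires a download):

```python
# Hypothetical record mirroring the `parts` schema from the card's metadata.
example = {
    "title": "Example story",
    "part_count": 2,
    "parts": [
        {"title": "Chapter 1", "clean_text": "First chapter text.",
         "date": "2020-01-01", "text": "", "url": ""},
        {"title": "Chapter 2", "clean_text": "Second chapter text.",
         "date": "2020-01-08", "text": "", "url": ""},
    ],
}

def join_parts(example):
    """Concatenate chapter titles and cleaned texts into one document."""
    return "\n\n".join(
        f"{part['title']}\n{part['clean_text']}" for part in example["parts"]
    )

print(join_parts(example))
```

The same function works on real records from the streaming iterator above, since they share the `parts` structure.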
## Personal and Sensitive Information
Information about the original authors is included in the dataset where possible. Many stories from the dataset contain NSFW content.
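If NSFW entries need to be excluded, one option is to skip examples by their `rating` field. A minimal, self-contained sketch; the rating strings and the whitelist below are placeholders, and the actual values used on ficbook.net may differ:

```python
# Hypothetical examples; real `rating` values from ficbook.net may differ.
examples = [
    {"title": "Story A", "rating": "G"},
    {"title": "Story B", "rating": "NC-17"},
    {"title": "Story C", "rating": "PG-13"},
]

# Placeholder whitelist of acceptable ratings.
ALLOWED_RATINGS = {"G", "PG-13"}

safe = [ex for ex in examples if ex["rating"] in ALLOWED_RATINGS]
print([ex["title"] for ex in safe])  # → ['Story A', 'Story C']
```

With streaming enabled, the same predicate can be passed to `dataset.filter` so that excluded stories are never materialized.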