---
dataset_info:
  features:
  - name: id
    dtype: uint32
  - name: language
    dtype: string
  - name: url
    dtype: string
  - name: title
    dtype: string
  - name: text_markdown
    dtype: string
  - name: text_html
    dtype: string
  - name: author
    dtype: string
  - name: original_author
    dtype: string
  - name: original_url
    dtype: string
  - name: lead_html
    dtype: string
  - name: lead_markdown
    dtype: string
  - name: type
    dtype: string
  - name: time_published
    dtype: uint64
  - name: statistics
    struct:
    - name: commentsCount
      dtype: uint32
    - name: favoritesCount
      dtype: uint32
    - name: readingCount
      dtype: uint32
    - name: score
      dtype: int32
    - name: votesCount
      dtype: int32
    - name: votesCountPlus
      dtype: int32
    - name: votesCountMinus
      dtype: int32
  - name: labels
    sequence: string
  - name: hubs
    sequence: string
  - name: flows
    sequence: string
  - name: tags
    sequence: string
  - name: reading_time
    dtype: uint32
  - name: format
    dtype: string
  - name: complexity
    dtype: string
  - name: comments
    sequence:
    - name: id
      dtype: uint64
    - name: parent_id
      dtype: uint64
    - name: level
      dtype: uint32
    - name: time_published
      dtype: uint64
    - name: score
      dtype: int32
    - name: votes
      dtype: uint32
    - name: message_html
      dtype: string
    - name: message_markdown
      dtype: string
    - name: author
      dtype: string
    - name: children
      sequence: uint64
  splits:
  - name: train
    num_bytes: 19968161329
    num_examples: 302049
  download_size: 3485570346
  dataset_size: 19968161329
task_categories:
- text-generation
language:
- ru
- en
size_categories:
- 100K<n<1M
---

# Habr dataset

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Description](#description)
- [Usage](#usage)
- [Data Instances](#data-instances)
- [Source Data](#source-data)
- [Personal and Sensitive Information](#personal-and-sensitive-information)

## Description

**Summary:** Dataset of posts and comments from [habr.com](https://habr.com/ru/all/), a Russian collaborative blog about IT, computer science and anything related to the Internet.

**Script:** [create_habr.py](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py)

**Point of Contact:** [Ilya Gusev](mailto:ilya.gusev@phystech.edu)

**Languages:** Russian, English, some programming code.


## Usage

Prerequisites:
```bash
pip install datasets zstandard jsonlines pysimdjson
```

Dataset iteration:
```python
from datasets import load_dataset
dataset = load_dataset('IlyaGusev/habr', split="train", streaming=True)
for example in dataset:
    print(example["text_markdown"])
```

## Data Instances

```json
{
  "id": 12730,
  "language": "ru",
  "url": "https://habr.com/ru/post/12730/",
  "text_markdown": "...",
  "text_html": "...",
  "lead_markdown": "...",
  "lead_html": "...",
  "type": "article",
  "labels": [],
  "original_author": null,
  "original_url": null,
  "time_published": 1185962380,
  "author": "...",
  "title": "Хочешь в университет — сделай презентацию",
  "statistics": {
    "commentsCount": 23,
    "favoritesCount": 1,
    "readingCount": 1542,
    "score": 7,
    "votesCount": 15,
    "votesCountPlus": 11,
    "votesCountMinus": 4
  },
  "hubs": [
    "itcompanies"
  ],
  "flows": [
    "popsci"
  ],
  "tags": [
    "PowerPoint",
    "презентация",
    "абитуриенты"
  ],
  "reading_time": 1,
  "format": null,
  "complexity": null,
  "comments": {
    "id": [11653537, 11653541],
    "parent_id": [null, 11653537],
    "level": [0, 1],
    "time_published": [1185963192, 1185967886],
    "score": [-1, 0],
    "votes": [1, 0],
    "message_html": ["...", "..."],
    "message_markdown": ["...", "..."],
    "author": ["...", "..."],
    "children": [[11653541], []]
  }
}
```
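The `time_published` fields (for both posts and comments) are Unix timestamps in seconds. A quick way to convert them to readable dates, shown here on the sample value above:

```python
from datetime import datetime, timezone

# "time_published" from the sample instance above
ts = 1185962380
published = datetime.fromtimestamp(ts, tz=timezone.utc)
print(published.isoformat())  # → 2007-08-01T09:59:40+00:00
```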

The `datasets` library flattens sequence-of-struct fields such as `comments` into parallel lists. You can use this small helper to unflatten them back into a list of records:

```python
def revert_flattening(records):
    """Convert a dict of parallel lists back into a list of dicts."""
    fixed_records = []
    for key, values in records.items():
        # Allocate one empty record per element on the first key.
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records
```

The original JSONL is already unflattened.
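For example, applied to a flattened `comments` field like the one in the sample instance (the helper is repeated here so the snippet runs standalone):

```python
def revert_flattening(records):
    """Convert a dict of parallel lists back into a list of dicts."""
    fixed_records = []
    for key, values in records.items():
        if not fixed_records:
            fixed_records = [{} for _ in range(len(values))]
        for i, value in enumerate(values):
            fixed_records[i][key] = value
    return fixed_records

# A flattened "comments" field, as yielded by the datasets library
# (truncated to three keys for brevity).
flat_comments = {
    "id": [11653537, 11653541],
    "parent_id": [None, 11653537],
    "level": [0, 1],
}
comments = revert_flattening(flat_comments)
print(comments[0])  # → {'id': 11653537, 'parent_id': None, 'level': 0}
```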


## Source Data

* The data source is the [Habr](https://habr.com/) website.
* API call example: [post 709430](https://habr.com/kek/v2/articles/709430).
* The processing script is [here](https://github.com/IlyaGusev/rulm/blob/master/data_processing/create_habr.py).

## Personal and Sensitive Information

The dataset is not anonymized, so individuals' names can be found in the dataset. Information about the original authors is included in the dataset where possible.