---
annotations_creators: 
- crowdsourced
license: other
language_creators:
- crowdsourced
language:
- code
task_categories:
- text-generation
tags:
- code
- swift
- native iOS development
- curated
size_categories:
- 100K<n<1M
source_datasets: []
pretty_name: iva-swift-codeint-clean
task_ids:
- language-modeling
---

# IVA Swift GitHub Code Dataset

## Dataset Description

This is the curated IVA Swift dataset extracted from GitHub. 
It contains Swift files gathered and filtered for the purpose of training a code generation model.

The dataset consists of 383,380 Swift code files from GitHub, totaling ~542 MB of data. 
The [uncurated](https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint) dataset was created from the public GitHub dataset on Google BigQuery.

### How to use it

To download the full dataset:

```python
from datasets import load_dataset
dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')
```

To load the dataset and inspect a single example:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')
print(dataset[723])

# Output (formatted for readability):
{
   "repo_name":"jdkelley/Udacity-OnTheMap-ExampleApps",
   "path":"TheMovieManager-v2/TheMovieManager/BorderedButton.swift",
   "copies":"2",
   "size":"2649",
   "content":"...let phoneBorderedButtonExtraPadding: CGFloat = 14.0\n    \n    var backingColor: UIColor? = nil\n    var highlightedBackingColor: UIColor? = nil\n    \n    // MARK: Initialization\n}",
   "license":"mit",
   "hash":"db1587fd117e9a835f58cf8203d8bf05",
   "line_mean":29.1136363636,
   "line_max":87,
   "alpha_frac":0.6700641752,
   "ratio":5.298,
   "autogenerated":false,
   "config_or_test":false,
   "has_no_keywords":false,
   "has_few_assignments":false
}
```
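
The full dataset is ~542 MB. If downloading everything is not desired, it can also be streamed lazily; a minimal sketch using the `datasets` streaming mode:

```python
from datasets import load_dataset

# Stream records instead of downloading the full dataset to disk.
streamed = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train', streaming=True)

# Lazily iterate over the first few records.
for i, sample in enumerate(streamed):
    print(sample["repo_name"], sample["path"])
    if i >= 4:
        break
```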

## Data Structure

### Data Fields

|Field|Type|Description|
|---|---|---|
|repo_name|string|Name of the GitHub repository.|
|path|string|Path of the file within the GitHub repository.|
|copies|string|Number of occurrences of the file in the dataset.|
|content|string|Content of the source file.|
|size|string|Size of the source file in bytes.|
|license|string|License of the GitHub repository.|
|hash|string|Hash of the content field.|
|line_mean|number|Mean line length of the content.|
|line_max|number|Maximum line length of the content.|
|alpha_frac|number|Fraction of alphanumeric characters in the content.|
|ratio|number|Ratio between the number of characters and the number of tokens after tokenization.|
|autogenerated|boolean|True if the content is autogenerated, based on keywords in the first few lines of the file.|
|config_or_test|boolean|True if the content is a configuration file or a unit test.|
|has_no_keywords|boolean|True if the file contains none of the keywords typical for the Swift programming language.|
|has_few_assignments|boolean|True if the file uses the symbol `=` fewer than `minimum` times.|
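
The boolean quality flags can be used to filter the data further after loading. A minimal sketch, assuming the fields are exposed exactly as listed above:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')

# Keep only files that are not autogenerated, are not config/test code,
# contain Swift keywords, and have enough assignments.
filtered = dataset.filter(
    lambda x: not x["autogenerated"]
    and not x["config_or_test"]
    and not x["has_no_keywords"]
    and not x["has_few_assignments"]
)
print(len(filtered))
```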

### Instance

```json
{
   "repo_name":"...",
   "path":".../BorderedButton.swift",
   "copies":"2",
   "size":"2649",
   "content":"...",
   "license":"mit",
   "hash":"db1587fd117e9a835f58cf8203d8bf05",
   "line_mean":29.1136363636,
   "line_max":87,
   "alpha_frac":0.6700641752,
   "ratio":5.298,
   "autogenerated":false,
   "config_or_test":false,
   "has_no_keywords":false,
   "has_few_assignments":false
}
```

## Languages

The dataset contains only Swift files.

```json
{
    "Swift": [".swift"]
}
```

## Licenses

Each entry in the dataset contains the associated license. The following is a list of licenses involved and their occurrences.

```json
{
   "agpl-3.0":1695,
   "apache-2.0":85514,
   "artistic-2.0":207,
   "bsd-2-clause":3132,
   "bsd-3-clause":6600,
   "cc0-1.0":1409,
   "epl-1.0":605,
   "gpl-2.0":9374,
   "gpl-3.0":18920,
   "isc":808,
   "lgpl-2.1":1122,
   "lgpl-3.0":3103,
   "mit":240929,
   "mpl-2.0":8181,
   "unlicense":1781
}
```
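
If only permissively licensed files are needed, the `license` field can be used as a filter. A minimal sketch; the set of licenses kept here is just an example, not a legal recommendation:

```python
from datasets import load_dataset

# Example set of permissive licenses; adjust to your own requirements.
PERMISSIVE = {"mit", "apache-2.0", "bsd-2-clause", "bsd-3-clause", "isc", "cc0-1.0", "unlicense"}

dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')
permissive_only = dataset.filter(lambda x: x["license"] in PERMISSIVE)
print(len(permissive_only))
```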

## Dataset Statistics

```json
{
    "Total size": "~542 MB",
    "Number of files": 383380,
    "Number of files under 500 bytes": 3680,
    "Average file size in bytes": 5942,
}
```
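
These figures can be reproduced from the `size` field, which is stored as a string and therefore needs to be cast. A minimal sketch:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-swift-codeint-clean', split='train')

sizes = [int(s) for s in dataset["size"]]
print("Number of files:", len(sizes))
print("Total size (MB):", round(sum(sizes) / (1024 * 1024), 1))
print("Files under 500 bytes:", sum(1 for s in sizes if s < 500))
print("Average file size (bytes):", sum(sizes) // len(sizes))
```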

## Curation Process

* Removal of duplicate files based on file hash.
* Removal of file templates, i.e. files containing any of the following: `___FILENAME___`, `___PACKAGENAME___`, `___FILEBASENAME___`, `___FILEHEADER___`, `___VARIABLE`.
* Removal of files containing any of the following words in the first 10 lines: `generated`, `auto-generated`, `autogenerated`, `automatically generated`.
* Removal of files containing any of the following words in the first 10 lines, with a probability of 0.7: `test`, `unit test`, `config`, `XCTest`, `JUnit`.
* Removal of files whose fraction of alphanumeric characters is below 0.3.
* Removal of near-duplicates based on MinHash and Jaccard similarity.
* Removal of files with a mean line length above 100.
* Removal of files without mention of any of the following keywords, with a probability of 0.7: `struct `, `class `, `for `, `while `, `enum `, `func `, `typealias `, `var `, `let `, `protocol `, `public `, `private `, `internal `, `import `.
* Removal of files that use the assignment operator `=` fewer than 3 times.
* Removal of files whose ratio between the number of characters and the number of tokens after tokenization is lower than 1.5.

The curation process is derived from the one used in the CodeParrot project: https://huggingface.co/codeparrot
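
For illustration, a simplified sketch of two of these heuristics (the autogeneration check and the assignment count check); the helper names are hypothetical and the released dataset may use a different implementation:

```python
AUTOGEN_KEYWORDS = ("generated", "auto-generated", "autogenerated", "automatically generated")
MIN_ASSIGNMENTS = 3  # files using '=' fewer than this many times are dropped


def is_autogenerated(content: str) -> bool:
    """Return True if an autogeneration keyword appears in the first 10 lines."""
    first_lines = "\n".join(content.lower().splitlines()[:10])
    return any(keyword in first_lines for keyword in AUTOGEN_KEYWORDS)


def has_few_assignments(content: str, minimum: int = MIN_ASSIGNMENTS) -> bool:
    """Return True if the file uses the '=' symbol fewer than `minimum` times."""
    return content.count("=") < minimum
```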

## Data Splits
The dataset contains only a train split, which has been further separated into train and valid subsets available here:
* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-swift-codeint-clean-valid
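
A minimal sketch of loading the pre-split versions; the split names inside each repository are assumed to follow the default `train` convention:

```python
from datasets import load_dataset

# Load the pre-split clean train and valid versions as DatasetDicts.
train = load_dataset('mvasiliniuc/iva-swift-codeint-clean-train')
valid = load_dataset('mvasiliniuc/iva-swift-codeint-clean-valid')
print(train)
print(valid)
```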

## Considerations for Using the Data

The dataset comprises source code from a wide range of repositories. It may therefore contain harmful or biased code, 
as well as sensitive information such as passwords or usernames.