AlekseyKorshuk committed on
Commit
ca74b1a
1 Parent(s): d2bcee9

huggingartists

README.md CHANGED
@@ -1,76 +1,43 @@
 ---
- languages:
- - en
 tags:
 - huggingartists
 - lyrics
 ---

- # Dataset Card for "huggingartists/ciggy-blacc"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
- - [Dataset Summary](#dataset-summary)
- - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- - [Languages](#languages)
- - [How to use](#how-to-use)
- - [Dataset Structure](#dataset-structure)
- - [Data Fields](#data-fields)
- - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
- - [Curation Rationale](#curation-rationale)
- - [Source Data](#source-data)
- - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
- - [Licensing Information](#licensing-information)
- - [Citation Information](#citation-information)
- - [About](#about)
-
- ## Dataset Description
-
- - **Homepage:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- - **Repository:** [https://github.com/AlekseyKorshuk/huggingartists](https://github.com/AlekseyKorshuk/huggingartists)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of the generated dataset:** 0.175239 MB
-
-
- <div class="inline-flex flex-col" style="line-height: 1.5;">
 <div class="flex">
- <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/7ba8a81d32ea254df43b31447958e85f.500x500x1.png&#39;)">
 </div>
 </div>
- <a href="https://huggingface.co/huggingartists/ciggy-blacc">
- <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
- </a>
 <div style="text-align: center; font-size: 16px; font-weight: 800">Ciggy Blacc</div>
 <a href="https://genius.com/artists/ciggy-blacc">
 <div style="text-align: center; font-size: 14px;">@ciggy-blacc</div>
 </a>
 </div>

- ### Dataset Summary
-
- The Lyrics dataset parsed from Genius. This dataset is designed to generate lyrics with HuggingArtists.
- Model is available [here](https://huggingface.co/huggingartists/ciggy-blacc).

- ### Supported Tasks and Leaderboards

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### Languages

- en

- ## How to use

- How to load this dataset directly with the datasets library:

 ```python
 from datasets import load_dataset
@@ -78,116 +45,42 @@ from datasets import load_dataset
 dataset = load_dataset("huggingartists/ciggy-blacc")
 ```

- ## Dataset Structure

- An example of 'train' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "text": "Look, I was gonna go easy on you\nNot to hurt your feelings\nBut I'm only going to get this one chance\nSomething's wrong, I can feel it..."
- }
- ```

- ### Data Fields

- The data fields are the same among all splits.

- - `text`: a `string` feature.

- ### Data Splits

- | train | validation | test |
- |------:|-----------:|-----:|
- |    23 |          - |    - |
-
- 'train' can easily be divided into 'train', 'validation' and 'test' with a few lines of code:

 ```python
- from datasets import load_dataset, Dataset, DatasetDict
- import numpy as np
-
- datasets = load_dataset("huggingartists/ciggy-blacc")
-
- train_percentage = 0.9
- validation_percentage = 0.07
- test_percentage = 0.03
-
- train, validation, test = np.split(datasets['train']['text'], [int(len(datasets['train']['text'])*train_percentage), int(len(datasets['train']['text'])*(train_percentage + validation_percentage))])
-
- datasets = DatasetDict(
-     {
-         'train': Dataset.from_dict({'text': list(train)}),
-         'validation': Dataset.from_dict({'text': list(validation)}),
-         'test': Dataset.from_dict({'text': list(test)})
-     }
- )
 ```
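The 90/7/3 arithmetic in the split code can be checked without downloading anything. This is a minimal sketch on a made-up 100-line corpus (a stand-in for `datasets['train']['text']`, not the real data), using the same cut points as the `np.split` call:

```python
# Made-up corpus standing in for datasets['train']['text'].
texts = [f"line {i}" for i in range(100)]

train_percentage = 0.9
validation_percentage = 0.07

# Same cut points as the np.split call: 90% and 97% of the corpus length.
cut1 = int(len(texts) * train_percentage)
cut2 = int(len(texts) * (train_percentage + validation_percentage))

train = texts[:cut1]           # first 90%
validation = texts[cut1:cut2]  # next 7%
test = texts[cut2:]            # remaining 3%
```

Note that the test percentage never appears in the computation: whatever remains after the train and validation cuts becomes the test split.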

- ## Dataset Creation

- ### Curation Rationale

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### Source Data

- #### Initial Data Collection and Normalization

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- #### Who are the source language producers?

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### Annotations

- #### Annotation process

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- #### Who are the annotators?

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### Personal and Sensitive Information

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ## Considerations for Using the Data

- ### Social Impact of Dataset

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### Discussion of Biases

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### Other Known Limitations

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ## Additional Information

- ### Dataset Curators

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### Licensing Information

- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

- ### Citation Information

- ```
- @InProceedings{huggingartists,
-   author = {Aleksey Korshuk},
-   year = {2022}
- }
- ```

 ## About

 
 ---
+ language: en
+ datasets:
+ - huggingartists/ciggy-blacc
 tags:
 - huggingartists
 - lyrics
+ - lm-head
+ - causal-lm
+ widget:
+ - text: "I am"
 ---

+ <div class="inline-flex flex-col" style="line-height: 1.5;">
 <div class="flex">
+ <div style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://images.genius.com/7ba8a81d32ea254df43b31447958e85f.500x500x1.png&#39;)">
 </div>
 </div>
+ <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
 <div style="text-align: center; font-size: 16px; font-weight: 800">Ciggy Blacc</div>
 <a href="https://genius.com/artists/ciggy-blacc">
 <div style="text-align: center; font-size: 14px;">@ciggy-blacc</div>
 </a>
 </div>

+ I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).

+ Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!

+ ## How does it work?

+ To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).

+ ## Training data

+ The model was trained on lyrics from Ciggy Blacc.

+ The dataset is available [here](https://huggingface.co/datasets/huggingartists/ciggy-blacc) and can be loaded with:

 ```python
 from datasets import load_dataset

 dataset = load_dataset("huggingartists/ciggy-blacc")
 ```

+ You can [explore the data](https://wandb.ai/huggingartists/huggingartists/runs/ei5jqzy8/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.

+ ## Training procedure

+ The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Ciggy Blacc's lyrics.

+ Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1xsvugxq) for full transparency and reproducibility.

+ At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1xsvugxq/artifacts) is logged and versioned.

+ ## How to use

+ You can use this model directly with a pipeline for text generation:

 ```python
+ from transformers import pipeline
+ generator = pipeline('text-generation',
+                      model='huggingartists/ciggy-blacc')
+ generator("I am", num_return_sequences=5)
 ```

+ Or with the Transformers library:

+ ```python
+ from transformers import AutoTokenizer, AutoModelWithLMHead
+
+ tokenizer = AutoTokenizer.from_pretrained("huggingartists/ciggy-blacc")

+ model = AutoModelWithLMHead.from_pretrained("huggingartists/ciggy-blacc")
+ ```

+ ## Limitations and bias

+ The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

+ In addition, the data present in the artist's lyrics further affects the text generated by the model.

 ## About

config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f7f29a68cb131dacd797d549da7ed984e71bf1c8be2e0140b8d4d07c072eb1c7
+ size 987
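The binary files in this commit are stored as Git LFS pointers with exactly the three fields shown above (`version`, `oid`, `size`). As a sketch, such a pointer can be read with a few lines of Python; the `parse_lfs_pointer` helper is hypothetical, not part of huggingartists:

```python
def parse_lfs_pointer(text):
    """Split a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer content copied from the config.json diff above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:f7f29a68cb131dacd797d549da7ed984e71bf1c8be2e0140b8d4d07c072eb1c7\n"
    "size 987\n"
)
info = parse_lfs_pointer(pointer)
```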
evaluation.txt ADDED
@@ -0,0 +1 @@
+ {"eval_loss": 4.902784824371338, "eval_runtime": 0.1191, "eval_samples_per_second": 75.591, "eval_steps_per_second": 16.798, "epoch": 5.0}
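evaluation.txt holds a single JSON object; a minimal sketch of reading the metrics (the string below is copied verbatim from the diff above):

```python
import json

# evaluation.txt content from this commit.
raw = '{"eval_loss": 4.902784824371338, "eval_runtime": 0.1191, "eval_samples_per_second": 75.591, "eval_steps_per_second": 16.798, "epoch": 5.0}'
metrics = json.loads(raw)
eval_loss = metrics["eval_loss"]
```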
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e15a7e6c11a00aad352e13861146ab43314cdcca901ec28e7b4987f04edf962a
+ size 497764120
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
optimizer.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f594e79d305f7d2605f64056f4398459bb23e3092e645af39d7900ffb93d0280
+ size 995603825

pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c32f496fa6da04fae297c76215ed6020d135536973e34e9977a4e300d3ed06f3
+ size 510396521

rng_state.pth ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c452915503e2ba156cbb1f8f70e6ba4b73b59cccca83637e41e71693b29aaf48
+ size 14503

scheduler.pt ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c7d79951436f86ff91ab75a46d3883d64e8b38213f98f4990fcf7488329e2d3
+ size 623

special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6f50ab5a5a509a1c309d6171f339b196a900dc9c99ad0408ff23bb615fdae7ad
+ size 99

tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:86f12f9648802a0c45c4b87ef2ab235e9bfdf1a43cc40571291507c81e35c4c3
+ size 2107625

tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0f7123cb7e1be4e9c2e8770bd3c00f73d75364fb71680d0412d6263e7af2654c
+ size 255

trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9611481fe5df14ee221c000ca775b317b3f7931ba8e936b75e9914cb40bc283
+ size 698

training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:064dbbf5562739d669c7c4893b5b81f65048954db9773f39f2597a7cc84a3aa6
+ size 3311

vocab.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ba3c3109ff33976c4bd966589c11ee14fcaa1f4c9e5e154c2ed7f99d80709e7
+ size 798156