p1atdev committed on
Commit 8170002 · verified · 1 Parent(s): 075a9d3

Update README.md

Files changed (1)
  1. README.md +180 -156
README.md CHANGED
@@ -1,89 +1,177 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID
-
- <!-- Provide a quick summary of what the model is/does. -->
-
-

  ## Model Details

  ### Model Description

- <!-- Provide a longer summary of what this model is. -->
-
- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
-
- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]
-
- ### Model Sources [optional]
-
- <!-- Provide the basic links for the model. -->
-
- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]
-
- ## Uses
-
- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
-
- ### Direct Use
-
- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
-
- [More Information Needed]
-
- ### Downstream Use [optional]
-
- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
-
- [More Information Needed]
-
- ### Out-of-Scope Use
-
- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
-
- [More Information Needed]
-
- ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->
-
- [More Information Needed]
-
- ### Recommendations
-
- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
-
- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
-
- ## How to Get Started with the Model
-
- Use the code below to get started with the model.
-
- [More Information Needed]

  ## Training Details

  ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

  ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

  #### Preprocessing [optional]

@@ -92,110 +180,46 @@ Use the code below to get started with the model.

  #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
-
- #### Speeds, Sizes, Times [optional]
-
- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
-
- [More Information Needed]

  ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->
-
- ### Testing Data, Factors & Metrics
-
- #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]
-
- #### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
- [More Information Needed]

- #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->
-
- [More Information Needed]
-
- ### Results
-
- [More Information Needed]
-
- #### Summary
-
-
-
- ## Model Examination [optional]
-
- <!-- Relevant interpretability work for the model goes here -->
-
- [More Information Needed]
-
- ## Environmental Impact
-
- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
-
- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
-
- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]
-
- ## Technical Specifications [optional]
-
- ### Model Architecture and Objective
-
- [More Information Needed]

  ### Compute Infrastructure

- [More Information Needed]

  #### Hardware

- [More Information Needed]

  #### Software

- [More Information Needed]
-
- ## Citation [optional]
-
- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
- **BibTeX:**
-
- [More Information Needed]
-
- **APA:**
-
- [More Information Needed]
-
- ## Glossary [optional]
-
- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
-
- [More Information Needed]
-
- ## More Information [optional]
-
- [More Information Needed]
-
- ## Model Card Authors [optional]
-
- [More Information Needed]
-
- ## Model Card Contact
-
- [More Information Needed]

  ---
  library_name: transformers
+ license: apache-2.0
+ datasets:
+ - isek-ai/danbooru-tags-2024
+ base_model: p1atdev/dart-v2-moe-base
+ tags:
+ - trl
+ - sft
+ - optimum
+ - danbooru
+ inference: false
  ---

+ # Dart (Danbooru Tags Transformer) v2
+
+ This model is a fine-tuned version of the Dart (Danbooru Tags Transformer) v2 MoE base model that generates Danbooru tags.
+
+ Demo: [🤗 Space with ZERO](https://huggingface.co/spaces/p1atdev/danbooru-tags-transformer-v2)
+
+ ## Model variants
+
+ |Name|Architecture|Param size|Type|
+ |-|-|-|-|
+ |[v2-moe-sft](https://huggingface.co/p1atdev/dart-v2-moe-sft)|Mixtral|166m|SFT|
+ |[v2-moe-base](https://huggingface.co/p1atdev/dart-v2-moe-base)|Mixtral|166m|Pretrain|
+ |[v2-sft](https://huggingface.co/p1atdev/dart-v2-sft)|Mistral|114m|SFT|
+ |[v2-base](https://huggingface.co/p1atdev/dart-v2-base)|Mistral|114m|Pretrain|
+ |[v2-vectors](https://huggingface.co/p1atdev/dart-v2-vectors)|Embedding|-|Tag Embedding|
+
+ ## Usage
+
+ ### Using 🤗Transformers
+
+ ```py
+ import torch
+ from transformers import AutoTokenizer, AutoModelForCausalLM
+
+ MODEL_NAME = "p1atdev/dart-v2-moe-base"
+
+ tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
+ model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16)
+
+ # Condition tags; see the "Prompt Format" section below.
+ prompt = (
+     f"<|bos|>"
+     f"<copyright>vocaloid</copyright>"
+     f"<character>hatsune miku</character>"
+     f"<|rating:general|><|aspect_ratio:tall|><|length:long|>"
+     f"<general>1girl"
+ )
+ inputs = tokenizer(prompt, return_tensors="pt").input_ids
+
+ with torch.no_grad():
+     outputs = model.generate(
+         inputs,
+         do_sample=True,
+         temperature=1.0,
+         top_p=1.0,
+         top_k=100,
+         max_new_tokens=128,
+         num_beams=1,
+     )
+
+ # Decode the generated ids one by one (each token corresponds to one tag)
+ # and drop empty strings.
+ print(", ".join([tag for tag in tokenizer.batch_decode(outputs[0], skip_special_tokens=True) if tag.strip() != ""]))
+ ```
+
+ ### Using 📦`dartrs` library
+
+ > [!WARNING]
+ > This library is very experimental, and breaking changes are likely in the future.
+
+ [📦`dartrs`](https://github.com/p1atdev/dartrs) is an inference library for Dart v2 models built on the [🤗`candle`](https://github.com/huggingface/candle) backend.
+
+ ```bash
+ pip install -U dartrs
+ ```
+
+ ```py
+ from dartrs.dartrs import DartTokenizer
+ from dartrs.utils import get_generation_config
+ from dartrs.v2 import (
+     compose_prompt,
+     MixtralModel,
+     V2Model,
+ )
+ import time
+
+ MODEL_NAME = "p1atdev/dart-v2-moe-base"
+
+ model = MixtralModel.from_pretrained(MODEL_NAME)
+ tokenizer = DartTokenizer.from_pretrained(MODEL_NAME)
+
+ config = get_generation_config(
+     prompt=compose_prompt(
+         copyright="vocaloid",
+         character="hatsune miku",
+         rating="general",  # sfw, general, sensitive, nsfw, questionable, explicit
+         aspect_ratio="tall",  # ultra_wide, wide, square, tall, ultra_tall
+         length="medium",  # very_short, short, medium, long, very_long
+         prompt="1girl, cat ears",
+         do_completion=False,
+     ),
+     tokenizer=tokenizer,
+ )
+
+ start = time.time()
+ output = model.generate(config)
+ end = time.time()
+
+ print(output)
+ print(f"Time taken: {end - start:.2f}s")
+ # cowboy shot, detached sleeves, empty eyes, green eyes, green hair, green necktie, hair in own mouth, hair ornament, letterboxed, light frown, long hair, long sleeves, looking to the side, necktie, parted lips, shirt, sleeveless, sleeveless shirt, twintails, wing collar
+ # Time taken: 0.26s
+ ```
+
+ ## Prompt Format
+
+ ```py
+ prompt = (
+     f"<|bos|>"
+     f"<copyright>{copyright_tags_here}</copyright>"
+     f"<character>{character_tags_here}</character>"
+     f"<|rating:general|><|aspect_ratio:tall|><|length:long|>"
+     f"<general>{general_tags_here}"
+ )
+ ```
+
+ - Rating tag: `<|rating:sfw|>`, `<|rating:general|>`, `<|rating:sensitive|>`, `<|rating:nsfw|>`, `<|rating:questionable|>`, `<|rating:explicit|>`
+   - `sfw`: randomly generates tags from the `general` or `sensitive` rating categories.
+   - `general`: generates tags in the `general` rating category.
+   - `sensitive`: generates tags in the `sensitive` rating category.
+   - `nsfw`: randomly generates tags from the `questionable` or `explicit` rating categories.
+   - `questionable`: generates tags in the `questionable` rating category.
+   - `explicit`: generates tags in the `explicit` rating category.
+
+ - Aspect ratio tag: `<|aspect_ratio:ultra_wide|>`, `<|aspect_ratio:wide|>`, `<|aspect_ratio:square|>`, `<|aspect_ratio:tall|>`, `<|aspect_ratio:ultra_tall|>`
+   - `ultra_wide`: generates tags suited to extremely wide images. (~2:1)
+   - `wide`: generates tags suited to wide images. (2:1~9:8)
+   - `square`: generates tags suited to square images. (9:8~8:9)
+   - `tall`: generates tags suited to tall images. (8:9~1:2)
+   - `ultra_tall`: generates tags suited to extremely tall images. (1:2~)
+
+ - Length tag: `<|length:very_short|>`, `<|length:short|>`, `<|length:medium|>`, `<|length:long|>`, `<|length:very_long|>`
+   - `very_short`: generates about 10 tags in total.
+   - `short`: generates about 20 tags in total.
+   - `medium`: generates about 30 tags in total.
+   - `long`: generates about 40 tags in total.
+   - `very_long`: generates more than 40 tags in total.
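+
+ Putting these together: the following is a minimal sketch of composing a full prompt string from the condition tags above. `build_prompt` is a hypothetical helper written for illustration; the template above and `dartrs`'s `compose_prompt` are the canonical ways to build prompts.
+
+ ```py
+ # Hypothetical helper for illustration only; not part of this repository.
+ def build_prompt(
+     copyright: str,
+     character: str,
+     rating: str,        # sfw, general, sensitive, nsfw, questionable, explicit
+     aspect_ratio: str,  # ultra_wide, wide, square, tall, ultra_tall
+     length: str,        # very_short, short, medium, long, very_long
+     general: str,
+ ) -> str:
+     return (
+         "<|bos|>"
+         f"<copyright>{copyright}</copyright>"
+         f"<character>{character}</character>"
+         f"<|rating:{rating}|><|aspect_ratio:{aspect_ratio}|><|length:{length}|>"
+         f"<general>{general}"
+     )
+
+ print(build_prompt("vocaloid", "hatsune miku", "general", "tall", "long", "1girl"))
+ # <|bos|><copyright>vocaloid</copyright><character>hatsune miku</character><|rating:general|><|aspect_ratio:tall|><|length:long|><general>1girl
+ ```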

  ## Model Details

  ### Model Description

+ - **Developed by:** Plat
+ - **Model type:** Causal language model
+ - **Language(s) (NLP):** Danbooru tags
+ - **License:** Apache-2.0
+ - **Finetuned from model:** [dart-v2-moe-base](https://huggingface.co/p1atdev/dart-v2-moe-base)
+ - **Demo:** Available on [🤗 Space](https://huggingface.co/spaces/p1atdev/danbooru-tags-transformer-v2)

  ## Training Details

  ### Training Data

+ This model was trained on:
+
+ - [isek-ai/danbooru-tags-2024](https://huggingface.co/datasets/isek-ai/danbooru-tags-2024/tree/202403-at20240423) at revision `202403-at20240423`: a dataset of about 7M Danbooru tag records covering posts from 2005 through 2024-03-31.
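+
+ To work with the exact data snapshot, the dataset can be pinned to that revision with 🤗 Datasets (a minimal sketch; the `train` split name is an assumption, as it is not stated in this card):
+
+ ```py
+ from datasets import load_dataset
+
+ # Pin the revision so the data matches what the model was trained on.
+ ds = load_dataset(
+     "isek-ai/danbooru-tags-2024",
+     revision="202403-at20240423",  # revision stated above
+     split="train",                 # assumption: split name not stated in this card
+ )
+ print(ds)
+ ```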

  ### Training Procedure

+ TODO

  #### Preprocessing [optional]

  #### Training Hyperparameters

+ The following hyperparameters were used during training:
+
+ - learning_rate: 0.00025
+ - train_batch_size: 1024
+ - eval_batch_size: 256
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 2048
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 1000
+ - num_epochs: 4
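+
+ The training script itself is not published (see the TODO above). As a rough guide, the listed values map onto 🤗 Transformers `TrainingArguments` as in this hypothetical sketch; `output_dir` is an assumption, and the per-device vs. total batch-size accounting is not stated in the card:
+
+ ```py
+ from transformers import TrainingArguments
+
+ # Hypothetical reconstruction of the hyperparameters listed above;
+ # not the actual training script.
+ args = TrainingArguments(
+     output_dir="dart-v2-moe-sft",      # assumption: not stated in the card
+     learning_rate=2.5e-4,              # 0.00025
+     per_device_train_batch_size=1024,  # "train_batch_size" above
+     per_device_eval_batch_size=256,
+     seed=42,
+     gradient_accumulation_steps=2,     # 1024 * 2 = 2048 total train batch size
+     lr_scheduler_type="cosine",
+     warmup_steps=1000,
+     num_train_epochs=4,
+     # The optimizer defaults to AdamW with betas=(0.9, 0.999) and eps=1e-8,
+     # matching the Adam settings listed above.
+ )
+ ```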

  ## Evaluation

+ Evaluation has not been performed yet; results will be added once it is done.

+ ### Model Architecture and Objective
+
+ The architecture of this model is [Mixtral](https://huggingface.co/docs/transformers/model_doc/mixtral). See [config.json](./config.json) for details.
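+
+ The configuration can also be inspected without downloading the weights, using the standard 🤗 Transformers API (a minimal sketch; the printed fields are standard Mixtral config attributes):
+
+ ```py
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("p1atdev/dart-v2-moe-base")
+ print(config.model_type)         # "mixtral"
+ print(config.num_hidden_layers)  # transformer depth
+ print(config.num_local_experts)  # number of experts per MoE layer
+ ```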
  ### Compute Infrastructure

+ Private server.

  #### Hardware

+ 8x RTX A6000

  #### Software

+ - Dataset processing: [🤗 Datasets](https://github.com/huggingface/datasets)
+ - Training: [🤗 Transformers](https://github.com/huggingface/transformers)
+ - SFT: [🤗 TRL](https://github.com/huggingface/trl)
+ - Inference library: [📦 dartrs](https://github.com/p1atdev/dartrs)
+ - Backend: [🤗 candle](https://github.com/huggingface/candle)

+ ## Related Projects
+
+ - [dart-v1](https://huggingface.co/p1atdev/dart-v1): The first version of the Dart model.
+ - [KBlueLeaf/DanTagGen](https://huggingface.co/collections/KBlueLeaf/dantaggen-65f82fa9335881a67573556b): The aspect ratio tag was inspired by this project.
+ - [furusu/danbooru-tag-similarity](https://huggingface.co/spaces/furusu/danbooru-tag-similarity): The idea of clustering tags and the corresponding training method was inspired by this project.