Phương committed
Commit · 67a4819
Parent(s): 6397d91
Update README.md
README.md CHANGED
@@ -1,20 +1,120 @@

Removed:

## Training procedure

- PEFT 0.4.0

Added:

---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---

# Model Card for Kimiko_13B

<!-- Provide a quick summary of what the model is/does. -->

This is my new Kimiko model, fine-tuned from LLaMA2-13B for instruction following and roleplay.

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** nRuaif
- **Model type:** Decoder only
- **License:** CC BY-NC-SA
- **Finetuned from model [optional]:** LLaMA 2

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** https://github.com/OpenAccess-AI-Collective/axolotl

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

This model was trained on 3k examples of instructions and high-quality roleplay. For best results, follow this format:

```
<<HUMAN>>
How to do abc

<<AIBOT>>
Here is how
```

Or, with system prompting for roleplay:

```
<<SYSTEM>>
A's Persona:
B's Persona:
Scenario:
Add some instruction here on how you want your RP to go.
```
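
To make the format concrete, here is a minimal usage sketch with Hugging Face Transformers. The repository id, the example instruction, and the generation settings are illustrative assumptions, not values stated in this card.

```python
# Minimal sketch: prompting the model with the <<HUMAN>>/<<AIBOT>> format above.
# The repo id below is a hypothetical placeholder; point it at the actual weights.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nRuaif/Kimiko_13B"  # assumption, for illustration only
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Build the prompt exactly as the format describes, ending at the
# <<AIBOT>> tag so the model writes the assistant turn.
prompt = "<<HUMAN>>\nHow to do abc\n\n<<AIBOT>>\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)  # settings are arbitrary
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

For the roleplay variant, prepend the `<<SYSTEM>>` block with the personas and scenario before the first `<<HUMAN>>` turn.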

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

All biases in this model come from LLaMA2, with the exception of an added NSFW bias.

## Training Details

### Training Data

<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

3,000 examples from LIMAERP and LIMA, plus 1,000 good instructions sampled from Airoboros.

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

The model was trained on a single L4 GPU from GCP, costing a whopping 2.5 USD.

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

3 epochs, learning rate 0.0002, full 4096-token context, QLoRA.
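
For readers who want to reproduce a comparable run, the sketch below wires the stated values (3 epochs, learning rate 0.0002, 4096-token context, QLoRA) into a PEFT + bitsandbytes setup. The training here was done with axolotl and its exact config is not published in this card, so the LoRA rank/alpha, target modules, and batch size are placeholder assumptions.

```python
# Hedged sketch of a QLoRA setup matching the card's stated hyperparameters.
# Only epochs, learning rate, and context length come from the card; the
# LoRA rank/alpha, target modules, and batch size are assumptions.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "meta-llama/Llama-2-13b-hf"  # LLaMA2-13B base, per the card

bnb = BitsAndBytesConfig(            # 4-bit base weights: the "Q" in QLoRA
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base, quantization_config=bnb, device_map="auto")
model = prepare_model_for_kbit_training(model)

lora = LoraConfig(                   # rank/alpha/targets: assumptions
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)

args = TrainingArguments(
    output_dir="kimiko-qlora",
    num_train_epochs=3,              # stated in the card
    learning_rate=2e-4,              # stated in the card (0.0002)
    per_device_train_batch_size=1,   # assumption
)
# Inputs would be tokenized to the full context the card mentions, e.g.
# tokenizer(text, truncation=True, max_length=4096), then passed to a Trainer.
```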

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

Training took 18 hours with xformers enabled.

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** L4 with 12 vCPUs and 48 GB RAM
- **Hours used:** 5
- **Cloud Provider:** GCP
- **Compute Region:** US
- **Carbon Emitted:** 0.5 kg, which is offset by me turning off my PC while training