Keely0419 committed
Commit cb1ddcc (1 parent: b68176e)

upload files

Files changed (3):
1. .gitattributes +0 -1
2. README.md +149 -3
3. rinna.png +0 -0
.gitattributes CHANGED
@@ -33,7 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
- *.md filter=lfs diff=lfs merge=lfs -text
  tokenizer.model filter=lfs diff=lfs merge=lfs -text
  model-00001-of-00003.safetensors filter=lfs diff=lfs merge=lfs -text
  model-00002-of-00003.safetensors filter=lfs diff=lfs merge=lfs -text

README.md CHANGED
@@ -1,3 +1,149 @@
- version https://git-lfs.github.com/spec/v1
- oid sha256:6a369a24f4f0d4bcaa141d66a9ba3640e359a1e10ccad1c6ed94d7de09892dee
- size 4850

The Git LFS pointer was replaced with the model card content below.

---
thumbnail: https://github.com/rinnakk/japanese-pretrained-models/blob/master/rinna.png
license: gemma
language:
- ja
- en
tags:
- gemma2
- conversational
base_model:
- google/gemma-2-2b
- google/gemma-2-2b-it
- rinna/gemma-2-baku-2b
base_model_relation: merge
---

# `Gemma 2 Baku 2B Instruct (rinna/gemma-2-baku-2b-it)`

![rinna-icon](./rinna.png)

# Overview

The model is an instruction-tuned variant of [rinna/gemma-2-baku-2b](https://huggingface.co/rinna/gemma-2-baku-2b), fine-tuned with Chat Vector addition and Odds Ratio Preference Optimization (ORPO). It adheres to the gemma-2 chat format.
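
For reference, the gemma-2 chat format marks each turn with explicit turn tokens; `apply_chat_template` in the usage example below produces this layout automatically (a sketch of the format):

~~~~text
<bos><start_of_turn>user
{user message}<end_of_turn>
<start_of_turn>model
{model reply}<end_of_turn>
~~~~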

| Size | Continual Pre-Training | Instruction-Tuning |
| :- | :- | :- |
| 2B | Gemma 2 Baku 2B [[HF]](https://huggingface.co/rinna/gemma-2-baku-2b) | Gemma 2 Baku 2B Instruct [[HF]](https://huggingface.co/rinna/gemma-2-baku-2b-it) |

* **Model architecture**

  A 26-layer, 2304-hidden-size transformer-based language model. Please refer to the [Gemma 2 Model Card](https://www.kaggle.com/models/google/gemma-2/) for detailed information on the model's architecture.

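  As a quick check, both figures can be read from the published configuration (a minimal sketch using `AutoConfig`):

  ~~~~python
  from transformers import AutoConfig

  # 26 transformer layers with a 2304-dimensional hidden state.
  config = AutoConfig.from_pretrained("rinna/gemma-2-baku-2b-it")
  print(config.num_hidden_layers, config.hidden_size)  # expected: 26 2304
  ~~~~
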
* **Training**

  **Model merging.** The base model was endowed with instruction-following capabilities through a chat vector addition process. The chat vector was derived by subtracting the parameter vectors of [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b) from [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it), as follows.

  ~~~~text
  rinna/gemma-2-baku-2b + 1.0 * (google/gemma-2-2b-it - google/gemma-2-2b)
  ~~~~

  The embedding layer was excluded from this subtraction and addition of parameter vectors.

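  This merge can be illustrated with plain parameter arithmetic, as in the minimal sketch below (not rinna's published merging script; it assumes all three checkpoints fit in host memory, and the output path is hypothetical):

  ~~~~python
  import torch
  from transformers import AutoModelForCausalLM

  # Load the continually pre-trained model and the two Gemma 2 checkpoints on CPU.
  baku = AutoModelForCausalLM.from_pretrained("rinna/gemma-2-baku-2b", torch_dtype=torch.float32)
  base_sd = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b", torch_dtype=torch.float32).state_dict()
  it_sd = AutoModelForCausalLM.from_pretrained("google/gemma-2-2b-it", torch_dtype=torch.float32).state_dict()

  with torch.no_grad():
      for name, param in baku.named_parameters():
          if "embed_tokens" in name:
              continue  # the embedding layer is excluded from the chat vector
          # chat vector = (gemma-2-2b-it - gemma-2-2b), added with coefficient 1.0
          param += 1.0 * (it_sd[name] - base_sd[name])

  baku.save_pretrained("gemma-2-baku-2b-it-merged")  # hypothetical output path
  ~~~~
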
  **ORPO** was applied using a subset of the following dataset to further refine the performance of the merged model; a minimal training sketch follows the dataset list.

  - rinna's internal dataset

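  A minimal training sketch using TRL's `ORPOTrainer` is shown below; because the actual data is internal, the preference triples and checkpoint path are hypothetical stand-ins:

  ~~~~python
  from datasets import Dataset
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from trl import ORPOConfig, ORPOTrainer

  model_path = "gemma-2-baku-2b-it-merged"  # hypothetical: the chat-vector-merged checkpoint
  tokenizer = AutoTokenizer.from_pretrained(model_path)
  model = AutoModelForCausalLM.from_pretrained(model_path)

  # ORPO optimizes (prompt, chosen, rejected) preference triples directly,
  # with no separate reference model.
  train_dataset = Dataset.from_dict({
      "prompt": ["..."],
      "chosen": ["..."],
      "rejected": ["..."],
  })

  args = ORPOConfig(
      output_dir="orpo-output",
      beta=0.1,  # weight of the odds-ratio term relative to the SFT loss
      per_device_train_batch_size=1,
      num_train_epochs=1,
  )
  trainer = ORPOTrainer(model=model, args=args, train_dataset=train_dataset, tokenizer=tokenizer)
  trainer.train()
  ~~~~
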
* **Contributors**

  - [Xinqi Chen](https://huggingface.co/Keely0419)
  - [Toshiaki Wakatsuki](https://huggingface.co/t-w)
  - [Kei Sawada](https://huggingface.co/keisawada)

---

# Benchmarking

Please refer to [rinna's LM benchmark page](https://rinnakk.github.io/research/benchmarks/lm/index.html).

---

# How to use the model

~~~~python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "rinna/gemma-2-baku-2b-it"
dtype = torch.bfloat16

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="cuda",
    torch_dtype=dtype,
    attn_implementation="eager",  # eager attention is recommended for Gemma 2
)

chat = [
    {"role": "user", "content": "西田幾多郎とはどんな人物ですか?"},
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

input_ids = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt").to(model.device)
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
~~~~

---

# Tokenization
The model uses the original [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) tokenizer.

---

# How to cite
```bibtex
@misc{rinna-gemma-2-baku-2b-it,
    title = {rinna/gemma-2-baku-2b-it},
    author = {Chen, Xinqi and Wakatsuki, Toshiaki and Sawada, Kei},
    url = {https://huggingface.co/rinna/gemma-2-baku-2b-it}
}

@inproceedings{sawada2024release,
    title = {Release of Pre-Trained Models for the {J}apanese Language},
    author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
    booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
    month = {5},
    year = {2024},
    pages = {13898--13905},
    url = {https://aclanthology.org/2024.lrec-main.1213},
    note = {\url{https://arxiv.org/abs/2404.01657}}
}
```
---

# References
```bibtex
@article{gemma-2-2024,
    title = {Gemma 2},
    url = {https://www.kaggle.com/models/google/gemma-2},
    publisher = {Kaggle},
    author = {Gemma Team},
    year = {2024}
}

@article{huang2023chat,
    title = {Chat Vector: A Simple Approach to Equip LLMs with Instruction Following and Model Alignment in New Languages},
    author = {Huang, Shih-Cheng and Li, Pin-Zu and Hsu, Yu-Chi and Chen, Kuang-Ming and Lin, Yu Tung and Hsiao, Shih-Kai and Tzong-Han Tsai, Richard and Lee, Hung-yi},
    year = {2023},
    url = {https://arxiv.org/abs/2310.04799}
}

@article{hong2024orpo,
    title = {{ORPO}: Monolithic Preference Optimization without Reference Model},
    author = {Hong, Jiwoo and Lee, Noah and Thorne, James},
    year = {2024},
    url = {https://arxiv.org/abs/2403.07691}
}
```
---

# License
[Gemma Terms of Use](https://ai.google.dev/gemma/terms)

rinna.png ADDED