reciprocate committed on
Commit: 37978d4
Parent: 7441303

Update README.md

Files changed (1): README.md (+14, -31)
README.md CHANGED
@@ -20,15 +20,15 @@ extra_gated_fields:
   I ALLOW Stability AI to email me about new model releases: checkbox
 license: other
 ---
-# `StableLM 2 Chat`
+# `StableLM 2 12B Chat`
 
 ## Model Description
 
-`Stable LM 2 Chat` is a 12 billion parameter instruction tuned language model inspired by [HugginFaceH4's Zephyr 7B](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) training pipeline. The model is trained on a mix of publicly available datasets and synthetic datasets, utilizing [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290).
+`Stable LM 2 12B Chat` is a 12 billion parameter instruction tuned language model trained on a mix of publicly available datasets and synthetic datasets, utilizing [Direct Preference Optimization (DPO)](https://arxiv.org/abs/2305.18290).
 
 ## Usage
 
-`StableLM 2 Chat` uses the following instruction ChatML format
+`StableLM 2 12B Chat` uses the following instruction format (ChatML).
 This format is also available through the tokenizer's `apply_chat_template` method:
 
 ```python
@@ -50,15 +50,16 @@ inputs = tokenizer.apply_chat_template(
 
 tokens = model.generate(
 inputs.to(model.device),
-max_new_tokens=1024,
-temperature=0.5,
+max_new_tokens=100,
+temperature=0.7,
 do_sample=True
 )
+output = tokenizer.decode(tokens[:, inputs.input_ids.shape[-1]:][0], skip_special_tokens=False)
 
-print(tokenizer.decode(tokens[0], skip_special_tokens=False))
+print(output)
 ```
 
-StableLM 2 Chat also supports function call usage this is an example how you can use it:
+StableLM 2 12B Chat also supports function calling; here is an example of how to use it:
 ```python
 system_prompt_func = """\
 You are a helpful assistant with access to the following functions. You must use them if required -\n
@@ -81,26 +82,7 @@ You are a helpful assistant with access to the following functions. You must use
 ]
 }
 }
-},
-{
-"type": "function",
-"function": {
-"name": "EditImage",
-"description": "This is capable of changing, editing, adjusting, or modifying an image by describing changes to the image through a text prompt.",
-"parameters": {
-"type": "object",
-"properties": {
-"prompt": {
-"type": "string",
-"description": "The instruction used to edit the image."
-}
-},
-"required": [
-"prompt"
-]
-}
-}
-}
+}
 ]
 """
 messages = [{'role': 'system', 'content': system_prompt_func}, {'role': 'user', 'content': "Help me to generate a picture of Eiffel Tower in the night!"}]
@@ -116,15 +98,16 @@ tokens = model.generate(
 temperature=0.5,
 do_sample=True
 )
+output = tokenizer.decode(tokens[:, inputs.input_ids.shape[-1]:][0], skip_special_tokens=False)
 
-print(tokenizer.decode(tokens[0], skip_special_tokens=False))
+print(output)
 ```
 
 
 ## Model Details
 
 * **Developed by**: [Stability AI](https://stability.ai/)
-* **Model type**: `StableLM 2 Chat` model is an auto-regressive language model based on the transformer decoder architecture.
+* **Model type**: `StableLM 2 12B Chat` is an auto-regressive language model based on the transformer decoder architecture.
 * **Language(s)**: English
 * **Paper**: [Stable LM 2 Chat Technical Report](https://drive.google.com/file/d/1JYJHszhS8EFChTbNAf8xmqhKjogWRrQF/view?usp=sharing)
 * **Library**: [Alignment Handbook](https://github.com/huggingface/alignment-handbook.git)
@@ -157,8 +140,8 @@ The dataset is comprised of a mixture of open datasets large-scale datasets avai
 
 ### Training Infrastructure
 
-* **Hardware**: `StableLM 2 Chat` was trained on the Stability AI cluster across 8 nodes with 8 A100 80GBs GPUs for each nodes.
-* **Code Base**: We use our internal script for SFT steps and used [HuggingFace Alignment Handbook script](https://github.com/huggingface/alignment-handbook) for DPO training.
+* **Hardware**: `StableLM 2 12B Chat` was trained on the Stability AI cluster across 8 nodes, each with 8 A100 80GB GPUs.
+* **Code Base**: We use our internal script for SFT training and the [HuggingFace Alignment Handbook](https://github.com/huggingface/alignment-handbook) for DPO training.
 
 ## Use and Limitations
 
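Taken together, the changes above cap generation at 100 new tokens, raise the sampling temperature to 0.7, and decode only the tokens produced after the prompt. A minimal end-to-end sketch of that updated flow follows; the repo id `stabilityai/stablelm-2-12b-chat`, the model/tokenizer loading, and the exact `apply_chat_template` keyword arguments are assumptions added for illustration, since the hunks do not show them.

```python
# A minimal sketch of the updated usage flow. The repo id and loading details
# are assumptions; only the generate/decode settings mirror the diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stablelm-2-12b-chat"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",  # device_map="auto" requires `accelerate`
)

messages = [{"role": "user", "content": "Write a haiku about GPUs."}]

# Build the ChatML-formatted prompt via the tokenizer's chat template.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)

tokens = model.generate(
    **inputs.to(model.device),
    max_new_tokens=100,
    temperature=0.7,
    do_sample=True,
)

# Decode only the newly generated tokens, skipping the echoed prompt.
input_len = inputs["input_ids"].shape[-1]
output = tokenizer.decode(tokens[0, input_len:], skip_special_tokens=False)
print(output)
```

The slice past `input_len` mirrors the new `output = ...` line in the diff: it strips the prompt tokens before decoding.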
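The function-calling example is only partially visible: the hunks show the tail of the tool JSON inside `system_prompt_func` and a `messages` line. A hedged sketch of how those pieces fit together follows, reusing `model` and `tokenizer` from the sketch above; the abridged tool definition and the name `TextToImage` are illustrative assumptions, since the diff only shows the removed `EditImage` entry and the closing braces of the remaining tool.

```python
# A hedged sketch of the function-calling pattern, reusing `model` and
# `tokenizer` from the previous sketch. The tool definition is abridged and
# the name "TextToImage" is assumed, not taken from the diff.
import json

tools = [
    {
        "type": "function",
        "function": {
            "name": "TextToImage",  # assumed name
            "description": "Generates an image from a text prompt.",
            "parameters": {
                "type": "object",
                "properties": {
                    "prompt": {
                        "type": "string",
                        "description": "The text prompt used to generate the image.",
                    }
                },
                "required": ["prompt"],
            },
        },
    }
]

system_prompt_func = (
    "You are a helpful assistant with access to the following functions. "
    "You must use them if required -\n" + json.dumps(tools, indent=2)
)

messages = [
    {"role": "system", "content": system_prompt_func},
    {"role": "user", "content": "Help me to generate a picture of Eiffel Tower in the night!"},
]

# Same generation path as the plain chat example, with the temperature shown in the diff.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
)
tokens = model.generate(
    **inputs.to(model.device),
    max_new_tokens=100,  # assumed cap; not shown in the diff
    temperature=0.5,
    do_sample=True,
)
output = tokenizer.decode(tokens[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=False)
print(output)
```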