---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---

# Model Card

## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).
- Base model: [h2oai/h2ogpt-4096-llama2-7b](https://huggingface.co/h2oai/h2ogpt-4096-llama2-7b)

## Usage

To use the model on a machine with GPUs, first make sure the `transformers` library is installed.

```bash
pip install transformers==4.40.2
```

Also make sure to provide your Hugging Face token if the model lives in a private repo.
- You can log in to `huggingface_hub` by running:

```python
import huggingface_hub

huggingface_hub.login("<ACCESS_TOKEN>")  # replace <ACCESS_TOKEN> with your token
```
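
Alternatively, you can run `huggingface-cli login` in a terminal and paste the token when prompted.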

You will also need to download the classification head, either manually or by running the following code:

```python
from huggingface_hub import hf_hub_download

model_name = "samvelkoch/masked-mamba-1"  # either local folder or huggingface model name
hf_hub_download(repo_id=model_name, filename="classification_head.pth", local_dir="./")
```

You can make classification predictions by following the example below:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "samvelkoch/masked-mamba-1"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "How are you?"

tokenizer = AutoTokenizer.from_pretrained(
    model_name,
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map={"": "cuda:0"},
    trust_remote_code=True,
).eval()

# Load the saved classification head weights.
head_weights = torch.load("classification_head.pth", map_location="cuda")
# Layer settings can be arbitrary here, as we overwrite them with the saved weights.
head = torch.nn.Linear(1, 1, bias=False).to("cuda")
head.weight.data = head_weights

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# Run the base model and apply the classification head to the last token's output.
out = model(**inputs).logits
logits = head(out[:, -1])

print(logits)
```
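
The `logits` printed above are unnormalized scores. A minimal sketch for turning them into probabilities, assuming a binary (single-logit) head; a multi-class head would use softmax instead:

```python
import torch

# Binary (single-logit) head: squash the raw logit into a probability.
probs = torch.sigmoid(logits)
# For a multi-class head, normalize across classes instead:
# probs = torch.softmax(logits, dim=-1)
print(probs)
```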

## Quantization and sharding

You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True`. Sharding across multiple GPUs is also possible by setting `device_map="auto"`.
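
For example, a minimal sketch of 4-bit loading with sharding, assuming the `bitsandbytes` package is installed alongside `transformers`:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "samvelkoch/masked-mamba-1",
    load_in_4bit=True,    # or load_in_8bit=True for 8-bit quantization
    device_map="auto",    # shard layers across all visible GPUs
    trust_remote_code=True,
)
```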

## Model Architecture

```
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 4096, padding_idx=0)
    (layers): ModuleList(
      (0-31): 32 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (k_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (v_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (o_proj): Linear(in_features=4096, out_features=4096, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (up_proj): Linear(in_features=4096, out_features=11008, bias=False)
          (down_proj): Linear(in_features=11008, out_features=4096, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=4096, out_features=32000, bias=False)
)
```
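
This printout is what `print(model)` returns after loading the model as shown in the usage example above.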

## Model Configuration

This model was trained using H2O LLM Studio with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.
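
If you want to inspect the exact training settings, the config is plain YAML. A minimal sketch, assuming `cfg.yaml` has been downloaded locally (e.g., with `hf_hub_download` as shown above) and that PyYAML is installed:

```python
import yaml  # pip install pyyaml

# Parse the training configuration; the available keys are whatever
# H2O LLM Studio wrote out, not assumed here.
with open("cfg.yaml") as f:
    cfg = yaml.safe_load(f)
print(cfg)
```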

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.