psinger committed on
Commit
28b8b94
1 Parent(s): d5cddac

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,138 @@
---
language:
- en
library_name: transformers
license: apache-2.0
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
thumbnail: >-
  https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
pipeline_tag: text-generation
---

<div style="width: 90%; max-width: 600px; margin: 0 auto; overflow: hidden; background-color: white">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/636d18755aaed143cd6698ef/LAzQu_f5WOX7vqKl4yDsY.png"
       alt="Slightly cropped image"
       style="width: 102%; height: 102%; object-fit: cover; object-position: center; margin: -5% -5% -5% -5%;">
</div>

## Summary

h2o-danube3-500m-base is a foundation model trained by H2O.ai with 500 million parameters. We release two versions of this model:

| Model Name | Description |
|:-----------------------------------------------------------------------------------|:----------------|
| [h2oai/h2o-danube3-500m-base](https://huggingface.co/h2oai/h2o-danube3-500m-base) | Base model |
| [h2oai/h2o-danube3-500m-chat](https://huggingface.co/h2oai/h2o-danube3-500m-chat) | Chat model |

## Model Architecture

We adjust the Llama 2 architecture for a total of around 500m parameters. For details, please refer to our [Technical Report](https://arxiv.org/abs/2401.16818). We use the Mistral tokenizer with a vocabulary size of 32,000 and train our model up to a context length of 8,192.

The details of the model architecture are:

| Hyperparameter | Value |
|:----------------|:-------|
| n_layers | 16 |
| n_heads | 16 |
| n_query_groups | 8 |
| n_embd | 1536 |
| vocab size | 32000 |
| sequence length | 8192 |

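As a quick sanity check on the "around 500m parameters" figure, the table above, together with the `intermediate_size` of 4096 and untied embeddings recorded in the `config.json` added in this commit, is enough to reproduce the approximate parameter count by hand. The sketch below is purely illustrative and not part of the released card:

```python
# Back-of-the-envelope parameter count from the hyperparameter table (illustrative only).
n_layers, n_heads, n_query_groups = 16, 16, 8
n_embd, n_inner, vocab_size = 1536, 4096, 32000   # n_inner = intermediate_size in config.json

head_dim = n_embd // n_heads                       # 96
kv_dim = n_query_groups * head_dim                 # 768 (grouped-query attention)

attn = 2 * n_embd * n_embd + 2 * n_embd * kv_dim   # q/o projections + k/v projections
mlp = 3 * n_embd * n_inner                         # gate, up, down projections
norms = 2 * n_embd                                 # two RMSNorm weights per layer

total = n_layers * (attn + mlp + norms) + n_embd   # decoder stack + final norm
total += 2 * vocab_size * n_embd                   # input embeddings + untied lm_head

print(f"{total / 1e6:.1f}M parameters")            # ~513.6M, i.e. "around 500m"
```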
## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure the library is installed.

```bash
pip install "transformers>=4.42.3"
```

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube3-500m-base")

model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube3-500m-base",
    torch_dtype=torch.bfloat16,
)
model.cuda()

# Greedy continuation of a short prompt
inputs = tokenizer("The Danube is the second longest river in Europe", return_tensors="pt").to(model.device)
res = model.generate(
    **inputs,
    max_new_tokens=32,
    do_sample=False,
)
print(tokenizer.decode(res[0], skip_special_tokens=True))
```

## Quantization and sharding

You can load the model with quantization by specifying `load_in_8bit=True` or `load_in_4bit=True` in `from_pretrained`. Sharding across multiple GPUs is also possible by setting `device_map="auto"`.

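As an illustration (not part of the released card), a 4-bit, multi-GPU load could look roughly like the following; it assumes `bitsandbytes` and `accelerate` are installed in addition to `transformers`:

```python
# Minimal sketch: quantized loading sharded across available GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("h2oai/h2o-danube3-500m-base")
model = AutoModelForCausalLM.from_pretrained(
    "h2oai/h2o-danube3-500m-base",
    load_in_4bit=True,   # or load_in_8bit=True
    device_map="auto",   # place/shard layers automatically across GPUs
)
```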
## Model Architecture

```
LlamaForCausalLM(
  (model): LlamaModel(
    (embed_tokens): Embedding(32000, 1536, padding_idx=0)
    (layers): ModuleList(
      (0-15): 16 x LlamaDecoderLayer(
        (self_attn): LlamaSdpaAttention(
          (q_proj): Linear(in_features=1536, out_features=1536, bias=False)
          (k_proj): Linear(in_features=1536, out_features=768, bias=False)
          (v_proj): Linear(in_features=1536, out_features=768, bias=False)
          (o_proj): Linear(in_features=1536, out_features=1536, bias=False)
          (rotary_emb): LlamaRotaryEmbedding()
        )
        (mlp): LlamaMLP(
          (gate_proj): Linear(in_features=1536, out_features=4096, bias=False)
          (up_proj): Linear(in_features=1536, out_features=4096, bias=False)
          (down_proj): Linear(in_features=4096, out_features=1536, bias=False)
          (act_fn): SiLU()
        )
        (input_layernorm): LlamaRMSNorm()
        (post_attention_layernorm): LlamaRMSNorm()
      )
    )
    (norm): LlamaRMSNorm()
  )
  (lm_head): Linear(in_features=1536, out_features=32000, bias=False)
)
```

## Benchmarks

### 🤗 Open LLM Leaderboard v1

| Benchmark | acc_n |
|:--------------|:--------:|
| Average | 40.38 |
| ARC-challenge | 40.61 |
| Hellaswag | 60.52 |
| MMLU | 25.72 |
| TruthfulQA | 37.67 |
| Winogrande | 62.19 |
| GSM8K | 15.54 |

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
config.json ADDED
@@ -0,0 +1,28 @@
{
  "_name_or_path": "h2oai/h2o-danube3-500m-base",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "attention_bias": false,
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 1536,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "max_position_embeddings": 8192,
  "model_type": "llama",
  "num_attention_heads": 16,
  "num_hidden_layers": 16,
  "num_key_value_heads": 8,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-05,
  "rope_theta": 100000,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "bfloat16",
  "transformers_version": "4.42.3",
  "use_cache": true,
  "vocab_size": 32000
}
generation_config.json ADDED
@@ -0,0 +1,7 @@
{
  "_from_model_config": true,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.42.0.dev0"
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:534bdecc5d58c230e1ceff943ca56d0b09ff3b70be87102403b7f59453248986
size 1027198256
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:dadfd56d766715c61d2ef780a525ab43b8e6da4de6865bda3d95fdef5e134055
size 493443
tokenizer_config.json ADDED
@@ -0,0 +1,42 @@
{
  "add_bos_token": false,
  "add_eos_token": false,
  "added_tokens_decoder": {
    "0": {
      "content": "<unk>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "1": {
      "content": "<s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "2": {
      "content": "</s>",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "additional_special_tokens": [],
  "bos_token": "<s>",
  "clean_up_tokenization_spaces": false,
  "eos_token": "</s>",
  "legacy": true,
  "model_max_length": 1000000000000000019884624838656,
  "pad_token": null,
  "sp_model_kwargs": {},
  "spaces_between_special_tokens": false,
  "tokenizer_class": "LlamaTokenizer",
  "unk_token": "<unk>",
  "use_default_system_prompt": false
}