Small dummy Llama2-type model, usable for unit/integration tests. Suitable for CPU-only machines; see [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio/blob/main/tests/integration/test_integration.py) for an example integration test.
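
Below is a minimal sketch of such a test, assuming `pytest` and `torch` are available; the test name is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def test_tiny_llama_forward_pass():
    model_name = "MaxJeblick/llama2-0b-unit-test"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer("Hello, world!", return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)

    # The tiny model keeps the full Llama-2 vocabulary, so logits
    # have shape (batch, seq_len, vocab_size).
    assert outputs.logits.shape[-1] == model.config.vocab_size
```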

The model was created as follows:
```python
from transformers import AutoConfig, AutoTokenizer, AutoModelForCausalLM

repo_name = "MaxJeblick/llama2-0b-unit-test"
model_name = "h2oai/h2ogpt-4096-llama2-7b-chat"

# Start from the full 7B chat config and shrink every dimension
# so the model runs comfortably on CPU.
config = AutoConfig.from_pretrained(model_name)
config.hidden_size = 12
config.max_position_embeddings = 32
config.intermediate_size = 24
config.num_attention_heads = 2
config.num_hidden_layers = 2
config.num_key_value_heads = 2

# Reuse the original tokenizer; the vocabulary is unchanged.
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Initialize the tiny model with random (untrained) weights.
model = AutoModelForCausalLM.from_config(config)
print(model.num_parameters())  # 770_940

model.push_to_hub(repo_name, private=False)
tokenizer.push_to_hub(repo_name, private=False)
config.push_to_hub(repo_name, private=False)
```
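
As a quick sanity check, the published checkpoint can be loaded and run on CPU. Since the weights are randomly initialized, any generated text is gibberish; the point is only that loading and generation work:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_name = "MaxJeblick/llama2-0b-unit-test"
model = AutoModelForCausalLM.from_pretrained(repo_name)
tokenizer = AutoTokenizer.from_pretrained(repo_name)

input_ids = tokenizer("Hi", return_tensors="pt").input_ids
# max_position_embeddings is 32, so keep prompt + generation short.
output_ids = model.generate(input_ids, max_new_tokens=8, do_sample=False)
print(tokenizer.decode(output_ids[0]))
```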


Use the following configuration in [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to run a complete experiment in **5 seconds**, using the default dataset and leaving all other settings at their defaults:

```yaml
Validation Size: 0.1
Data Sample: 0.1
Max Length Prompt: 32
Max Length Answer: 32
Max Length: 64
Backbone Dtype: float16
Gradient Checkpointing: False
Batch Size: 8
Max Length Inference: 16
```