---
license: apache-2.0
library_name: transformers
datasets:
- cerebras/SlimPajama-627B
- HuggingFaceH4/ultrachat_200k
- hkust-nlp/deita-10k-v0
- Open-Orca/SlimOrca-Dedup
- cognitivecomputations/WizardLM_evol_instruct_V2_196k_unfiltered_merged_split
- HuggingFaceH4/capybara
- meta-math/MetaMathQA
- argilla/ultrafeedback-binarized-preferences-cleaned
- Intel/orca_dpo_pairs
- alexredna/oasst2_dpo_pairs
pipeline_tag: text-generation
---


## Model Details

With great enthusiasm, we unveil the Prem-1B series: open-source, multipurpose large language models developed by Prem AI. These models give the open community and enterprises access to capabilities that were once available only through closed model APIs, empowering them to build their own advanced language models. Our objective is a model that excels at Retrieval-Augmented Generation (RAG). While Large Language Models (LLMs) store vast amounts of information within their parameters, RAG ingests information at runtime, which suggests that RAG applications may not require models of immense size. With this initiative, we aim to build a Small Language Model (SLM) with an extended context length of 8,192 tokens, enabling it to handle multi-turn conversations effectively. This is our first attempt at an SLM tailored for RAG tasks.
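
Since RAG supplies knowledge at generation time rather than storing it in the weights, the practical question is how retrieved passages fit into the 8,192-token window. The following is a minimal, hypothetical sketch of RAG-style prompt construction; the retriever and `retrieved_docs` are assumptions (not part of this card), and `tokenizer` is loaded as shown in the "How to Get Started" section below:

```py
# Hypothetical RAG prompt construction. `retrieved_docs` would come from your
# own retriever; only the chat-template usage mirrors this card's examples.
retrieved_docs = [
    "Passage 1: Machine learning studies algorithms that improve with data.",
    "Passage 2: Supervised learning fits a model to labeled examples.",
]
context = "\n\n".join(retrieved_docs)

messages = [
    {"role": "system",
     "content": f"Answer using only the context below.\n\nContext:\n{context}"},
    {"role": "user", "content": "What is supervised learning?"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
```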

### Model Description

This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.

- **Developed by:** https://premai.io/
- **Model type:** Llama
- **Language(s) (NLP):** English
- **License:** Apache License 2.0


## Uses

The Prem-1B language model is designed for commercial and research applications involving the English language. The instruction-tuned versions of the model are tailored for conversational interactions akin to a virtual assistant. On the other hand, the pretrained variants can be fine-tuned and adapted for various natural language generation tasks beyond just dialogue.

### Out-of-Scope Use

The model must not be used in any manner that violates applicable laws or regulations, including trade compliance laws. Use that contravenes the Acceptable Use Policy is likewise prohibited. While the base model is intended for English-language use, developers may fine-tune the Prem-1B models for other languages, provided such use complies with the Apache 2.0 license and the Acceptable Use Policy.


### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

## How to Get Started with the Model

Using `AutoModelForCausalLM` and `AutoTokenizer`:
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("premai-io/prem-1B-chat")
model = AutoModelForCausalLM.from_pretrained("premai-io/prem-1B-chat", torch_dtype=torch.bfloat16)
model = model.to("cuda")

# Setup terminators
terminators = [tokenizer.eos_token_id, tokenizer.encode('<|eot_id|>', add_special_tokens=False)[0]]

# Prepare the prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
    },
    {
        "role": "user",
        "content": "Help me understand machine learning."
    }
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Generate
inputs = tokenizer(prompt, return_attention_mask=False, return_tensors="pt", add_special_tokens=False)
input_ids = inputs['input_ids']
input_ids = input_ids.to(model.device)
res = model.generate(input_ids=input_ids, max_new_tokens=400, pad_token_id=tokenizer.pad_token_id, eos_token_id=terminators)
generated_text = tokenizer.decode(res[0][input_ids.shape[1]:], skip_special_tokens=True).strip()
print(generated_text)
```
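
For interactive use, you can stream tokens to the console as they are generated. A minimal sketch using 🤗 transformers' `TextStreamer` (`model`, `tokenizer`, `input_ids`, and `terminators` are reused from the snippet above):

```py
from transformers import TextStreamer

# Print decoded tokens as they are generated, skipping the prompt
# and special tokens.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
model.generate(
    input_ids=input_ids,
    max_new_tokens=400,
    pad_token_id=tokenizer.pad_token_id,
    eos_token_id=terminators,
    streamer=streamer,
)
```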

Using pipelines:
```py
import torch
from transformers import pipeline

# Load the pipeline
pipe = pipeline("text-generation", model="premai-io/prem-1B-chat", torch_dtype=torch.bfloat16, device=0)

# Prepare prompt
messages = [
    {
        "role": "system",
        "content": "You are a helpful AI assistant. You should give concise responses to very simple questions, but provide thorough responses to more complex and open-ended questions."
    },
    {
        "role": "user",
        "content": "Help me understand machine learning."
    }
]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Setup terminators
terminators = [pipe.tokenizer.eos_token_id, pipe.tokenizer.encode('<|eot_id|>', add_special_tokens=False)[0]]

# Generate
outputs = pipe(prompt, max_new_tokens=400, do_sample=True, temperature=0.7, top_k=50, top_p=0.95, pad_token_id=pipe.tokenizer.pad_token_id, eos_token_id=terminators)
print(outputs[0]["generated_text"][len(prompt):])
```
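
Recent 🤗 transformers releases also let the `text-generation` pipeline accept the chat `messages` list directly and apply the chat template itself. A brief sketch, assuming a sufficiently recent transformers version:

```py
# The pipeline applies the chat template; the last message in the
# returned conversation is the assistant's reply.
outputs = pipe(messages, max_new_tokens=400, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"][-1]["content"])
```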

## Training Details

### Training Data

See the blog post: https://blog.premai.io/p/e4168cd0-36f2-4a7f-b810-50393dd65601/

### Training Procedure

See the blog post: https://blog.premai.io/p/e4168cd0-36f2-4a7f-b810-50393dd65601/

#### Training Hyperparameters

See the blog post: https://blog.premai.io/p/e4168cd0-36f2-4a7f-b810-50393dd65601/


## Evaluation

### Results

|Model                   |Avg  |Arc-c|Arc-e|Hellaswag|MMLU |Obqa |Piqa |Winogrande|
|------------------------|-----|-----|-----|---------|-----|-----|-----|----------|
|prem-1B                 |42.64|24.74|57.40|42.01    |24.75|21.00|72.14|56.43     |
|prem-1B-chat            |41.76|24.48|53.32|40.28    |25.27|22.20|70.89|55.88     |
|TinyLlama-1.1B-Chat-v1.0|46.16|30.03|61.53|46.56    |24.72|25.80|74.21|60.29     |
|opt-1.3b                |42.94|23.37|57.44|41.49    |24.86|23.20|71.49|58.72     |
|pythia-1b               |40.71|24.31|56.90|37.72    |23.20|18.80|70.62|53.43     |
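
The card does not state which evaluation harness produced these numbers. Benchmarks like these are commonly run with EleutherAI's lm-evaluation-harness; a hypothetical reproduction sketch (harness choice, few-shot settings, and task names are assumptions, not from this card):

```py
# Hypothetical: requires `pip install lm-eval`. All settings here are
# assumptions; the card does not document its evaluation setup.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=premai-io/prem-1B-chat,dtype=bfloat16",
    tasks=["arc_challenge", "arc_easy", "hellaswag", "mmlu",
           "openbookqa", "piqa", "winogrande"],
)
print(results["results"])
```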

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5f440d8f79c1ba4c353d0f6d/PqscXKPvnwvymNxqYAxjR.png)


## Environmental Impact

- **Hardware Type:** H100 GPUs
- **Hours used:** 8500


## Technical Specifications

### Model Architecture and Objective

Llama-based decoder-only transformer, trained with a causal language modeling (next-token prediction) objective.

### Compute Infrastructure

16× H100 GPUs

#### Hardware

H100 GPUs

#### Software

PyTorch, transformers, PyTorch Lightning

## Citation

https://blog.premai.io/p/e4168cd0-36f2-4a7f-b810-50393dd65601/


## Model Card Authors

https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz

## Model Card Contact

https://huggingface.co/goku, https://huggingface.co/nsosio, https://huggingface.co/ucalyptus, https://huggingface.co/filopedraz