---
base_model: Qwen/Qwen2-7B
datasets:
- macadeliccc/opus_samantha
- HuggingFaceH4/ultrachat_200k
- teknium/OpenHermes-2.5
- Sao10K/Claude-3-Opus-Instruct-15K
license: apache-2.0
language:
- en
- zh
---
# Samantha Qwen2 7B

Trained on 2x RTX 4090 GPUs using QLoRA and FSDP.

+ [LoRA adapter](https://huggingface.co/macadeliccc/Samantha-Qwen2-7B-LoRa) (a loading sketch follows)
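
If you would rather apply the adapter yourself than download merged weights, here is a minimal sketch using PEFT (assuming `transformers`, `peft`, and `accelerate` are installed; the adapter repo name comes from the link above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the LoRA adapter on top of it
base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-7B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B")
model = PeftModel.from_pretrained(base, "macadeliccc/Samantha-Qwen2-7B-LoRa")
```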

## Launch Using vLLM

```bash
python -m vllm.entrypoints.openai.api_server \
    --model macadeliccc/Samantha-Qwen-2-7B \
    --chat-template ./examples/template_chatml.jinja
```
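
The `--chat-template` path above assumes you are running from a checkout of the vLLM repository, which ships a ChatML template under `examples/`; point it at your own copy otherwise.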

```python
from openai import OpenAI
# Set OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://localhost:8000/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="macadeliccc/Samantha-Qwen-2-7B",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Tell me a joke."},
    ]
)
print("Chat response:", chat_response)
```
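
The snippet prints the full response object; the generated text itself lives in `chat_response.choices[0].message.content`.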

## Prompt Template

```
<|im_start|>system
You are a friendly assistant.<|im_end|>
<|im_start|>user
What is the capital of France?<|im_end|>
<|im_start|>assistant
The capital of France is Paris.<|im_end|>
```
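
You normally do not need to assemble this ChatML string by hand; the tokenizer's built-in chat template produces it. A short sketch (assuming `transformers` is installed):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("macadeliccc/Samantha-Qwen-2-7B")

messages = [
    {"role": "system", "content": "You are a friendly assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]

# Renders the ChatML prompt shown above, ending with the assistant header
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```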

## Quants

+ [AWQ](https://huggingface.co/macadeliccc/Samantha-Qwen2-7B-AWQ)
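
The AWQ checkpoint can be loaded straight through `transformers`; a minimal sketch, assuming the `autoawq` package is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# AWQ checkpoints need the autoawq package for the quantized kernels
model = AutoModelForCausalLM.from_pretrained(
    "macadeliccc/Samantha-Qwen2-7B-AWQ", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("macadeliccc/Samantha-Qwen2-7B-AWQ")
```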


[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.0`
```yaml
base_model: Qwen/Qwen2-7B
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer

trust_remote_code: true

load_in_8bit: false
load_in_4bit: true
strict: false

datasets:
  - path: macadeliccc/opus_samantha
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: uncensored-ultrachat.json
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: openhermes_200k.json
    type: sharegpt
    field: conversations
    conversation: chatml
  - path: opus_instruct.json
    type: sharegpt
    field: conversations
    conversation: chatml

chat_template: chatml
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./outputs/lora-out

sequence_len: 2048
sample_packing: false
pad_to_sequence_len:

adapter: qlora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:

wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention:

warmup_steps: 250
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```

</details><br>
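
To reproduce the run, a config like this is normally launched with `accelerate launch -m axolotl.cli.train config.yaml` (assuming axolotl `0.4.0` and its dependencies are installed).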