---
license: cc-by-nc-4.0
base_model: johnsnowlabs/CodeGemma-2B-Slerp
tags:
- generated_from_trainer
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: CodeGemma-2B-Slerp-dora
  results: []
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# CodeGemma-2B-Slerp-dora
![image/png](https://cdn-uploads.huggingface.co/production/uploads/660cfe98280a82e38fe4ef49/JrTnaEV4AapbLwx0Cb-Lc.png)

CodeGemma-2B-Slerp-dora is a DPO fine-tune of [johnsnowlabs/CodeGemma-2B-Slerp](https://huggingface.co/johnsnowlabs/CodeGemma-2B-Slerp) on the [argilla/distilabel-intel-orca-dpo-pairs](https://huggingface.co/datasets/argilla/distilabel-intel-orca-dpo-pairs) preference dataset, trained with DoRA adapters for 1,080 steps. All hyperparameters are listed below.
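
The exact training script is not included here, but a minimal sketch of how the preference data could be prepared for DPO looks like the following; the dataset column names (`system`, `input`, `chosen`, `rejected`) and the use of the base model's chat template are assumptions, not something stated on this card.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Tokenizer of the base model; the column names below follow the Argilla
# dataset card ("system", "input", "chosen", "rejected") and are assumptions.
tokenizer = AutoTokenizer.from_pretrained("johnsnowlabs/CodeGemma-2B-Slerp")
dataset = load_dataset("argilla/distilabel-intel-orca-dpo-pairs", split="train")

def to_dpo_format(example):
    # Fold any system text into the user turn, since Gemma-style chat
    # templates generally do not accept a separate system role.
    user_turn = (example.get("system") or "").strip()
    user_turn = (user_turn + "\n\n" + example["input"]).strip()
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": user_turn}],
        tokenize=False,
        add_generation_prompt=True,
    )
    return {"prompt": prompt, "chosen": example["chosen"], "rejected": example["rejected"]}

# Keep only the prompt/chosen/rejected columns that DPO training expects.
dataset = dataset.map(to_dpo_format, remove_columns=dataset.column_names)
```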


## 🏆 Evaluation results

### Coming Soon

## Usage

```python
!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "johnsnowlabs/CodeGemma-2B-dora"
messages = [{"role": "user", "content": "Explain what machine learning is."}]

# Build the prompt with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Load the model in float16 and spread it across the available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response and print the full generated text.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 2e-04
- train_batch_size: 1
- gradient_accumulation_steps: 8
- optimizer: PagedAdamW with 32-bit precision
- lr_scheduler_type: Cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 1080
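
Expressed as `transformers.TrainingArguments`, those settings map roughly onto the sketch below; the `output_dir` is a placeholder, and the `paged_adamw_32bit` optimizer requires `bitsandbytes`.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="codegemma-2b-slerp-dora",  # placeholder output path
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,         # effective batch size of 8
    optim="paged_adamw_32bit",             # PagedAdamW, 32-bit states (needs bitsandbytes)
    lr_scheduler_type="cosine",
    warmup_steps=100,
    max_steps=1080,
)
```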


### LoRA Config

- lora_r: 16
- lora_alpha: 32
- lora_dropout: 0.05
- peft_use_dora: true
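
Putting the pieces together, a hedged sketch of the adapter and trainer setup with `peft` (which exposes DoRA via `use_dora=True` since 0.9) and a 2024-era `trl` `DPOTrainer` might look like this; the target modules, the float16 load, and the trainer wiring are assumptions rather than the exact recipe used for this model.

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig
from trl import DPOTrainer

# Adapter config matching the values listed above; target modules are a
# common choice for Gemma-style models and are an assumption here.
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    use_dora=True,  # DoRA: weight-decomposed low-rank adaptation
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

# Base model to fine-tune.
base_model = AutoModelForCausalLM.from_pretrained(
    "johnsnowlabs/CodeGemma-2B-Slerp",
    torch_dtype=torch.float16,
    device_map="auto",
)

# `training_args`, `dataset`, and `tokenizer` come from the earlier sketches.
trainer = DPOTrainer(
    model=base_model,
    args=training_args,
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```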

### Framework versions
- Transformers 4.39.0.dev0
- Peft 0.9.1.dev0
- Datasets 2.18.0
- torch 2.2.0
- accelerate 0.27.2