---
tags:
- generated_from_trainer
- code
- coding
- phi-2
- phi2
model-index:
- name: phi-2-coder
  results: []
license: other
license_name: microsoft-research-license
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- code
thumbnail: https://huggingface.co/mrm8488/phi-2-coder/resolve/main/phi-2-coder-logo.png
datasets:
- HuggingFaceH4/CodeAlpaca_20K
pipeline_tag: text-generation
---

<div style="text-align:center;width:250px;height:250px;">
    <img src="https://huggingface.co/mrm8488/phi-2-coder/resolve/main/phi-2-coder-logo.png" alt="phi-2 coder logo">
</div>


# Phi-2 Coder 👩‍💻
**Phi-2** fine-tuned on the **CodeAlpaca 20k instructions dataset** using **QLoRA** and the [PEFT](https://github.com/huggingface/peft) library.
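
For reference, a fine-tune of this kind can be set up roughly as sketched below. This is a minimal sketch only: the LoRA rank, alpha, dropout, and target modules shown here are illustrative assumptions, not the exact configuration used for this model.

```py
# Minimal QLoRA-style setup sketch (hyperparameters are illustrative assumptions)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "microsoft/phi-2"
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

# Load the base model quantized with bitsandbytes (8-bit, matching the config listed below)
model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Attach LoRA adapters; rank and target modules are assumptions for illustration
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```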

## Model description 🧠

[Phi-2](https://huggingface.co/microsoft/phi-2)

Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source consisting of various synthetic NLP texts and filtered websites (selected for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showed nearly state-of-the-art performance among models with fewer than 13 billion parameters.


## Training and evaluation data 📚

[CodeAlpaca_20K](https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K): contains 20K instruction-following examples that were used to fine-tune the Code Alpaca model.
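
As a quick illustration, the dataset can be inspected with the `datasets` library; the split name below is the usual one for this dataset, but check the dataset card if it differs.

```py
# Quick look at the fine-tuning data (split and column names may vary; see the dataset card)
from datasets import load_dataset

ds = load_dataset("HuggingFaceH4/CodeAlpaca_20K", split="train")
print(ds)      # number of rows and column names
print(ds[0])   # a single instruction-following example
```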



### Training procedure


The following `bitsandbytes` quantization config was used during training (see the code sketch after this list):
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
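
Expressed as a `transformers` `BitsAndBytesConfig`, that configuration looks roughly like this; the list above is the authoritative record, this is just a sketch for readers reproducing the setup.

```py
# Sketch of the quantization config above as a BitsAndBytesConfig object
import torch
from transformers import BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    load_in_4bit=False,
    llm_int8_threshold=6.0,
    llm_int8_skip_modules=None,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,  # 4-bit fields are inert here since load_in_4bit=False
)
```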

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 2.5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 66
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
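
For readers reproducing the run, these map onto `transformers.TrainingArguments` roughly as sketched below; only the values listed above come from the actual run, while the output directory is an assumed placeholder.

```py
# Rough TrainingArguments equivalent of the hyperparameters listed above
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi-2-coder",        # assumed placeholder, not from the original run
    learning_rate=2.5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=66,
    gradient_accumulation_steps=32,  # 4 x 32 = 128 effective train batch size
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```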

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.7631        | 0.36  | 50   | 0.7174          |
| 0.6735        | 0.71  | 100  | 0.6949          |
| 0.696         | 1.07  | 150  | 0.6893          |
| 0.7861        | 1.42  | 200  | 0.6875          |
| 0.7346        | 1.78  | 250  | 0.6867          |



### HumanEval results 📊

WIP


### Example of usage 👩‍💻

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mrm8488/phi-2-coder"

tokenizer = AutoTokenizer.from_pretrained(model_id, add_bos_token=True, trust_remote_code=True, use_fast=False)

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, torch_dtype=torch.float16, device_map="auto")

def generate(
        instruction,
        max_new_tokens=128,
        temperature=0.1,
        top_p=0.75,
        top_k=40,
        num_beams=2,
        **kwargs,
):
    prompt = "Instruct: " + instruction + "\nOutput:"
    print(prompt)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].to("cuda")
    attention_mask = inputs["attention_mask"].to("cuda")
  
    with torch.no_grad():
        generation_output = model.generate(
            input_ids=input_ids,
            attention_mask=attention_mask,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=temperature,
            top_p=top_p,
            top_k=top_k,
            num_beams=num_beams,
            eos_token_id=tokenizer.eos_token_id,
            use_cache=True,
            early_stopping=True,
            **kwargs,
        )
    output = tokenizer.decode(generation_output[0])
    return output.split("\nOutput:")[1].lstrip("\n")

instruction = "Design a class for representing a person in Python."
print(generate(instruction))
```


### Citation
```
@misc {manuel_romero_2023,
	author       = { {Manuel Romero} },
	title        = { phi-2-coder (Revision 4ae69ae) },
	year         = 2023,
	url          = { https://huggingface.co/mrm8488/phi-2-coder },
	doi          = { 10.57967/hf/1518 },
	publisher    = { Hugging Face }
}
```