edumunozsala committed da31a02 (parent: 6b58971)

Update README.md

Files changed (1): README.md (+171, −175)

---
library_name: transformers
license: apache-2.0
datasets:
- iamtarun/python_code_instructions_18k_alpaca
language:
- en
metrics:
- rouge
pipeline_tag: text-generation
---

# Phi-3-mini 3.8B QLoRA Python Coder 👩‍💻

**Phi-3-mini 3.8B** fine-tuned on the **python_code_instructions_18k_alpaca** code-instructions dataset using **QLoRA** with the [PEFT](https://github.com/huggingface/peft) library.

## Pretrained model description

[Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct)

Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality, reasoning-dense properties. The model belongs to the Phi-3 family; the Mini version comes in two variants, 4K and 128K, which refer to the context length (in tokens) it can support.

## Tokenizer

Phi-3 Mini-4K-Instruct supports a vocabulary size of up to 32064 tokens. The tokenizer files already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
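
As a quick sanity check (a minimal sketch, not part of the original fine-tuning code), the tokenizer can be loaded and inspected directly:

```py
from transformers import AutoTokenizer

# Load the base model's tokenizer and inspect it
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
print(len(tokenizer))                # current vocabulary size
print(tokenizer.special_tokens_map)  # control tokens such as <|end|>
```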

## Training data

[python_code_instructions_18k_alpaca](https://huggingface.co/datasets/iamtarun/python_code_instructions_18k_alpaca)

The dataset contains problem descriptions and code in the Python language. It is taken from sahil2801/code_instructions_120k, which adds a prompt column in Alpaca style. A quick way to inspect it is sketched below.
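
A minimal sketch for loading and inspecting the dataset with the 🤗 `datasets` library (illustrative only, not the original preprocessing code):

```py
from datasets import load_dataset

# Load the instruction dataset used for fine-tuning
dataset = load_dataset("iamtarun/python_code_instructions_18k_alpaca", split="train")

print(dataset)               # features include instruction, input, output, prompt
print(dataset[0]["prompt"])  # Alpaca-style prompt built from the other columns
```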

### Chat Format

Given the nature of the training data, the Phi-3 Mini-4K-Instruct model is best suited to prompts in the chat format shown below. You can provide a prompt as a question with this generic template:

```
<|user|>\nQuestion <|end|>\n<|assistant|>
```

For example:

```
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```

where the model generates the text after `<|assistant|>`. A few-shot prompt can be formatted as follows:

```
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world.<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
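
Rather than concatenating these control tokens by hand, the same prompt can be built with the tokenizer's chat template; a minimal sketch:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-mini-4k-instruct")

# apply_chat_template inserts <|user|>, <|end|> and <|assistant|> automatically
messages = [{"role": "user", "content": "How to explain Internet for a medieval knight?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)
```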

### Training hyperparameters

The following `bitsandbytes` and `PEFT` configuration was used during training:
```py
################################################################################
# bitsandbytes parameters
################################################################################
# Activate 4-bit precision base model loading
use_4bit = True
# Compute dtype for 4-bit base models
bnb_4bit_compute_dtype = "bfloat16"
# Quantization type (fp4 or nf4)
bnb_4bit_quant_type = "nf4"
# Activate nested quantization for 4-bit base models (double quantization)
use_double_quant = True

################################################################################
# LoRA parameters
################################################################################
# LoRA attention dimension
lora_r = 16
# Alpha parameter for LoRA scaling
lora_alpha = 16
# Dropout probability for LoRA layers
lora_dropout = 0.05
# Modules to apply the LoRA adapters to
target_modules = ["k_proj", "q_proj", "v_proj", "o_proj", "gate_proj", "down_proj", "up_proj"]
```
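
For context, this is roughly how those values plug into `BitsAndBytesConfig` and `LoraConfig`; a sketch under the assumption of standard QLoRA wiring (`bias` and `task_type` are typical choices, not taken from the original script):

```py
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization config from the bitsandbytes parameters above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_compute_dtype=getattr(torch, bnb_4bit_compute_dtype),  # torch.bfloat16
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_use_double_quant=use_double_quant,
)

# LoRA adapter config from the LoRA parameters above
peft_config = LoraConfig(
    r=lora_r,
    lora_alpha=lora_alpha,
    lora_dropout=lora_dropout,
    target_modules=target_modules,
    bias="none",            # assumption: train adapters only, no bias terms
    task_type="CAUSAL_LM",
)

# Load the quantized base model
model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
```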

**SFTTrainer arguments**

```py
evaluation_strategy="steps",
do_eval=True,
optim="paged_adamw_8bit",
per_device_train_batch_size=4,
gradient_accumulation_steps=8,
per_device_eval_batch_size=4,
log_level="debug",
save_strategy="epoch",
logging_steps=100,
learning_rate=1e-4,
fp16=not torch.cuda.is_bf16_supported(),
bf16=torch.cuda.is_bf16_supported(),
eval_steps=100,
num_train_epochs=3,
warmup_ratio=0.1,
lr_scheduler_type="linear",
report_to="wandb",
```
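
These keyword arguments belong to a `transformers` `TrainingArguments` object; a sketch of how they would be wired into `trl`'s `SFTTrainer` (the output path, dataset splits, text field, and sequence length are assumptions, not values from the original run):

```py
import torch
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="./phi3-mini-qlora-python-coder",  # hypothetical output path
    evaluation_strategy="steps",
    do_eval=True,
    optim="paged_adamw_8bit",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=8,
    per_device_eval_batch_size=4,
    log_level="debug",
    save_strategy="epoch",
    logging_steps=100,
    learning_rate=1e-4,
    fp16=not torch.cuda.is_bf16_supported(),
    bf16=torch.cuda.is_bf16_supported(),
    eval_steps=100,
    num_train_epochs=3,
    warmup_ratio=0.1,
    lr_scheduler_type="linear",
    report_to="wandb",
)

trainer = SFTTrainer(
    model=model,                  # quantized base model from the sketch above
    args=training_args,
    peft_config=peft_config,      # LoRA config from the sketch above
    train_dataset=train_dataset,  # assumed train/eval splits of the dataset
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    dataset_text_field="prompt",  # assumption: train on the Alpaca-style prompt column
    max_seq_length=1024,          # assumption
)
trainer.train()
```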

### Framework versions

- PEFT 0.4.0

## Training

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 100  | 1.142200 | 0.662472 |
| 200  | 0.623800 | 0.600241 |
| 300  | 0.593200 | 0.590614 |
| 400  | 0.592600 | 0.585953 |
| 500  | 0.579400 | 0.583388 |
| 600  | 0.586800 | 0.581465 |
| 700  | 0.571100 | 0.579619 |
| 800  | 0.572900 | 0.578471 |
| 900  | 0.585800 | 0.577197 |
| 1000 | 0.573200 | 0.576328 |
| 1100 | 0.573600 | 0.575592 |
| 1200 | 0.563800 | 0.575420 |
| 1300 | 0.576900 | 0.574614 |
| 1400 | 0.566800 | 0.574540 |
| 1500 | 0.567500 | 0.574162 |
| 1600 | 0.569300 | 0.574146 |

## Evaluation

Mean ROUGE scores, evaluated on a test set of 500 samples:

| Metric | Mean score |
|--------|------------|
| ROUGE-1 | 56.65 |
| ROUGE-2 | 37.55 |
| ROUGE-L | 51.08 |
| ROUGE-Lsum | 56.26 |
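
A minimal sketch of how such scores can be computed with the 🤗 `evaluate` library (the two lists are placeholders standing in for model completions and reference solutions from the test set):

```py
import evaluate

rouge = evaluate.load("rouge")

# Placeholder predictions and references; in practice these come from
# generating completions for the 500 test prompts
predictions = ["def sort_numbers(arr):\n    return sorted(arr)"]
references = ["def sort_array(arr):\n    return sorted(arr)"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL, rougeLsum on a 0-1 scale
```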

### Example of usage

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_id = "edumunozsala/phi3-mini-python-code-20k"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True,
                                             torch_dtype="auto", device_map="cuda")

instruction = "Create an algorithm in Python to sort an array of numbers."
input = "[9, 3, 5, 1, 6]"

# Alpaca-style prompt combining the instruction and its input
prompt = f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input}

### Output:
"""

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Run inference on a prompt, wrapping it in the Phi-3 chat format first
def test_inference(prompt):
    prompt = pipe.tokenizer.apply_chat_template([{"role": "user", "content": prompt}],
                                                tokenize=False, add_generation_prompt=True)
    outputs = pipe(prompt, max_new_tokens=256, do_sample=True, num_beams=1,
                   temperature=0.3, top_k=50, top_p=0.95, max_time=180)
    # Strip the prompt from the generated text and return only the completion
    return outputs[0]['generated_text'][len(prompt):].strip()

test_inference(prompt)
```

### Citation

```bibtex
@misc{edumunozsala_2024,
  author    = { Eduardo Muñoz },
  title     = { phi3-mini-4k-qlora-python-code-20k },
  year      = 2024,
  url       = { https://huggingface.co/edumunozsala/phi3-mini-4k-qlora-python-code-20k },
  publisher = { Hugging Face }
}
```