edumunozsala committed
Commit 831ad12
1 Parent(s): d6f82aa

Upload README.md


Fix some simple mistakes

Files changed (1)
  1. README.md +5 -3
README.md CHANGED
@@ -16,9 +16,9 @@ pipeline_tag: text-generation
 ---
 
 
-# LlaMa 2 7b 4-bit Python Coder 👩‍💻 :man_technologist:
+# LlaMa 2 7b 4-bit Python Coder 👩‍💻
 
-**LlaMa-2 7b** fine-tuned on the **CodeAlpaca 20k instructions dataset** by using the method **QLoRA** in 4-bit with [PEFT](https://github.com/huggingface/peft) library.
+**LlaMa-2 7b** fine-tuned on the **python_code_instructions_18k_alpaca Code instructions dataset** by using the method **QLoRA** in 4-bit with [PEFT](https://github.com/huggingface/peft) library.
 
 ## Pretrained description
 
@@ -77,6 +77,7 @@ The following `bitsandbytes` quantization config was used during training:
 - PEFT 0.4.0
 
 ### Training metrics
+```
 {'loss': 1.044, 'learning_rate': 3.571428571428572e-05, 'epoch': 0.01}
 {'loss': 0.8413, 'learning_rate': 7.142857142857143e-05, 'epoch': 0.01}
 {'loss': 0.7299, 'learning_rate': 0.00010714285714285715, 'epoch': 0.02}
@@ -99,8 +100,10 @@ The following `bitsandbytes` quantization config was used during training:
 {'loss': 0.5659, 'learning_rate': 0.00019687629501847898, 'epoch': 0.11}
 {'loss': 0.5754, 'learning_rate': 0.00019643007196568606, 'epoch': 0.11}
 {'loss': 0.5936, 'learning_rate': 0.000195954644355717, 'epoch': 0.12}
+```
 
 ### Example of usage
+
 ```py
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
@@ -144,7 +147,6 @@ print(f"Generated instruction:\n{tokenizer.batch_decode(outputs.detach().cpu().n
 title = { llama-2-7b-int4-python-coder },
 year = 2023,
 url = { https://huggingface.co/edumunozsala/llama-2-7b-int4-python-18k-alpaca },
-doi = { },
 publisher = { Hugging Face }
 }
 ```
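For orientation, here is a minimal sketch of what the QLoRA setup described in the updated card typically looks like with `transformers`, `bitsandbytes`, and `peft`. It is not the author's training script: the base checkpoint name and every hyperparameter (quantization options, LoRA rank and alpha) are illustrative assumptions.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model = "meta-llama/Llama-2-7b-hf"  # assumed base checkpoint, not confirmed by this diff

# Load the base model quantized to 4-bit (NF4) with bitsandbytes
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)

# QLoRA: keep the 4-bit base frozen and train small LoRA adapters on top
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,              # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

From here, the instruction dataset (`python_code_instructions_18k_alpaca` per the card) would be tokenized and passed to a standard supervised fine-tuning loop; that part is omitted because the diff does not show it.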
 
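The card's usage example is cut off by the diff context above, so the following is a hedged illustration only: the prompt template, generation settings, and the assumption that the published repo can be loaded directly with `AutoModelForCausalLM` (consistent with the imports visible in the truncated example) are mine, not taken from the card.

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "edumunozsala/llama-2-7b-int4-python-18k-alpaca"  # from the citation URL above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

# Alpaca-style instruction prompt (format assumed for illustration)
prompt = (
    "### Instruction:\n"
    "Write a Python function that returns the n-th Fibonacci number.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.2)

print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
```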