SauravMaheshkar committed
Commit 75d1ef2
Parent: b5371e5

docs: update model card

Files changed (1): README.md (+25, -0, new file)
---
license: mit
datasets:
- conll2003
language:
- en
metrics:
- f1
library_name: peft
pipeline_tag: token-classification
tags:
- unsloth
- llama-2
---

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="150"/>](https://github.com/unslothai/unsloth)

At the time of writing, the 🤗 transformers library does not include a Llama implementation for token classification ([although there is an open PR](https://github.com/huggingface/transformers/pull/29878)).

This model is based on an [implementation](https://github.com/huggingface/transformers/issues/26521#issuecomment-1868284434) by community member [@KoichiYasuoka](https://github.com/KoichiYasuoka).
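
Below is a minimal sketch of what such a token-classification head can look like: a Llama backbone with a dropout and linear layer over the per-token hidden states. It is written in the spirit of the linked community implementation; the class name and exact wiring are assumptions, not a verbatim copy.

```python
from torch import nn
from transformers import LlamaModel, LlamaPreTrainedModel


class LlamaForTokenClassification(LlamaPreTrainedModel):
    """Sketch of a token-classification head on a Llama backbone (assumed wiring)."""

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.model = LlamaModel(config)  # decoder backbone
        self.dropout = nn.Dropout(0.1)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.post_init()

    def forward(self, input_ids=None, attention_mask=None, labels=None, **kwargs):
        outputs = self.model(input_ids=input_ids, attention_mask=attention_mask, **kwargs)
        hidden_states = self.dropout(outputs[0])  # (batch, seq_len, hidden_size)
        logits = self.classifier(hidden_states)   # per-token label scores
        loss = None
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return (loss, logits) if loss is not None else (logits,)
```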

* Base Model: `unsloth/llama-2-13b-bnb-4bit`
* LoRA adaptation with rank 16 and alpha 32 (see the configuration sketch after this list); other adapter settings can be found in [`adapter_config.json`](https://huggingface.co/SauravMaheshkar/unsloth-llama-2-7b-bnb-4bit-conll2003-rank-4/blob/main/adapter_config.json)
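
As a rough illustration, a PEFT `LoraConfig` with those values could look like the following. The `target_modules` list and dropout are assumptions (a common choice for Llama-style models), not values read from the actual `adapter_config.json`.

```python
from peft import LoraConfig, TaskType

# Illustrative configuration matching the stated rank/alpha; target_modules
# and lora_dropout are assumptions, see adapter_config.json for the real values.
lora_config = LoraConfig(
    task_type=TaskType.TOKEN_CLS,  # token classification
    r=16,                          # LoRA rank
    lora_alpha=32,                 # LoRA scaling factor
    lora_dropout=0.05,             # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
```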

This model was trained for only a single epoch; however, a notebook is available for those who want to train on other datasets or for longer.
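
For illustration, attaching the adapter for inference could look roughly like the following, reusing the `LlamaForTokenClassification` sketch above. The loading flow and `num_labels=9` (the CoNLL-2003 NER tag set) are assumptions, not this card's documented workflow.

```python
from peft import PeftModel
from transformers import AutoTokenizer

base_id = "unsloth/llama-2-13b-bnb-4bit"  # 4-bit weights require bitsandbytes
adapter_id = "SauravMaheshkar/unsloth-llama-2-7b-bnb-4bit-conll2003-rank-4"

tokenizer = AutoTokenizer.from_pretrained(base_id)
# CoNLL-2003 NER uses 9 BIO labels, hence num_labels=9 (an assumption here).
model = LlamaForTokenClassification.from_pretrained(base_id, num_labels=9)
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter
model.eval()
```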