Upload model #2
by kjehy2001 - opened

Files changed:
- README.md +15 -128
- adapter_config.json +7 -2
- adapter_model.bin +1 -1
README.md CHANGED
@@ -1,134 +1,21 @@
 ---
-language:
-- en
-- sp
-- ja
-- pe
-- hi
-- fr
-- ch
-- be
-- gu
-- ge
-- te
-- it
-- ar
-- po
-- ta
-- ma
-- ma
-- or
-- pa
-- po
-- ur
-- ga
-- he
-- ko
-- ca
-- th
-- du
-- in
-- vi
-- bu
-- fi
-- ce
-- la
-- tu
-- ru
-- cr
-- sw
-- yo
-- ku
-- bu
-- ma
-- cz
-- fi
-- so
-- ta
-- sw
-- si
-- ka
-- zh
-- ig
-- xh
-- ro
-- ha
-- es
-- sl
-- li
-- gr
-- ne
-- as
-- no
-
-widget:
-- text: "Translate to German: My name is Arthur"
-  example_title: "Translation"
-- text: "Please answer to the following question. Who is going to be the next Ballon d'or?"
-  example_title: "Question Answering"
-- text: "Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering."
-  example_title: "Logical reasoning"
-- text: "Please answer the following question. What is the boiling point of Nitrogen?"
-  example_title: "Scientific knowledge"
-- text: "Answer the following yes/no question. Can you write a whole Haiku in a single tweet?"
-  example_title: "Yes/no question"
-- text: "Answer the following yes/no question by reasoning step-by-step. Can you write a whole Haiku in a single tweet?"
-  example_title: "Reasoning task"
-- text: "Q: ( False or not False or False ) is? A: Let's think step by step"
-  example_title: "Boolean Expressions"
-- text: "The square root of x is the cube root of y. What is y to the power of 2, if x = 4?"
-  example_title: "Math reasoning"
-- text: "Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?"
-  example_title: "Premise and hypothesis"
-
-tags:
-- text2text-generation
-
-datasets:
-- svakulenk0/qrecc
-- taskmaster2
-- djaym7/wiki_dialog
-- deepmind/code_contests
-- lambada
-- gsm8k
-- aqua_rat
-- esnli
-- quasc
-- qed
-- financial_phrasebank
-
-license: apache-2.0
+library_name: peft
 ---
-
-# Model Card for LoRA-FLAN-T5 large
-
-![model image](https://s3.amazonaws.com/moonup/production/uploads/1666363435475-62441d1d9fdefb55a0b7d12c.png)
-
-This repository contains the LoRA (Low Rank Adapters) of `flan-t5-large` that have been fine-tuned on the [`financial_phrasebank`](https://huggingface.co/datasets/financial_phrasebank) dataset.
-
-## Usage
-
-Use this adapter with the `peft` library:
-
-```python
-# pip install peft transformers
-import torch
-from peft import PeftModel, PeftConfig
-from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
-
-peft_model_id = "ybelkada/flan-t5-large-financial-phrasebank-lora"
-config = PeftConfig.from_pretrained(peft_model_id)
-
-model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path)
-tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
-
-# Load the Lora model
-model = PeftModel.from_pretrained(model, peft_model_id)
-```
+## Training procedure
+
+The following `bitsandbytes` quantization config was used during training:
+- quant_method: bitsandbytes
+- load_in_8bit: True
+- load_in_4bit: False
+- llm_int8_threshold: 6.0
+- llm_int8_skip_modules: None
+- llm_int8_enable_fp32_cpu_offload: False
+- llm_int8_has_fp16_weight: False
+- bnb_4bit_quant_type: fp4
+- bnb_4bit_use_double_quant: False
+- bnb_4bit_compute_dtype: float32
+
+### Framework versions
+
+- PEFT 0.6.0.dev0
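For context, here is a minimal sketch of what the quantization settings recorded in the new README correspond to when loading the base model for this adapter. It is not part of the PR: the `BitsAndBytesConfig` field names simply mirror the list above, and the adapter repo id is taken from the replaced README, so it may not match this fork.

```python
# Minimal sketch (not part of this PR): load the base model with the 8-bit
# bitsandbytes setup recorded in the new README, then attach the adapter.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Mirrors the quantization config listed in the README above
bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
    llm_int8_enable_fp32_cpu_offload=False,
    llm_int8_has_fp16_weight=False,
)

base = AutoModelForSeq2SeqLM.from_pretrained(
    "google/flan-t5-large",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")

# Adapter repo id is an assumption carried over from the old README
model = PeftModel.from_pretrained(base, "ybelkada/flan-t5-large-financial-phrasebank-lora")
```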
adapter_config.json CHANGED
@@ -1,15 +1,20 @@
 {
+  "alpha_pattern": {},
+  "auto_mapping": null,
   "base_model_name_or_path": "google/flan-t5-large",
   "bias": "none",
-  "enable_lora": null,
   "fan_in_fan_out": false,
   "inference_mode": true,
+  "init_lora_weights": true,
+  "layers_pattern": null,
+  "layers_to_transform": null,
   "lora_alpha": 32,
   "lora_dropout": 0.05,
-  "merge_weights": false,
   "modules_to_save": null,
   "peft_type": "LORA",
   "r": 16,
+  "rank_pattern": {},
+  "revision": null,
   "target_modules": [
     "q",
     "v"
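For reference, a minimal sketch of the `peft` `LoraConfig` that the updated adapter_config.json corresponds to; the keyword arguments mirror the JSON fields above, and the task type is an assumption since it falls outside the hunk shown.

```python
# Minimal sketch (not part of this PR): the LoraConfig matching the
# updated adapter_config.json, expressed via peft's public API.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    r=16,                       # "r": 16
    lora_alpha=32,              # "lora_alpha": 32
    lora_dropout=0.05,          # "lora_dropout": 0.05
    bias="none",                # "bias": "none"
    target_modules=["q", "v"],  # T5 attention query/value projections
    task_type=TaskType.SEQ_2_SEQ_LM,  # assumption: not shown in the diff hunk
)
```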
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:e51f2302c00f46763631da00b6a47a5326a88e4f2f691148376184ece2b71d80
 size 18980429
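Since adapter_model.bin is tracked via git-lfs, the file in the repo is just the pointer shown above. A minimal sketch for checking a locally fetched copy against the new pointer, assuming the real file has been pulled (e.g. with `git lfs pull`):

```python
# Minimal sketch (not part of this PR): verify a downloaded adapter_model.bin
# against the oid/size recorded in the git-lfs pointer above.
import hashlib
import os

EXPECTED_OID = "e51f2302c00f46763631da00b6a47a5326a88e4f2f691148376184ece2b71d80"
EXPECTED_SIZE = 18980429
path = "adapter_model.bin"  # assumption: fetched locally, e.g. via `git lfs pull`

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()
assert digest == EXPECTED_OID, "sha256 mismatch"
print("adapter_model.bin matches the LFS pointer")
```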