- LoRA
---

# 🚀 Falcon-7b-QueAns

Falcon-7b-QueAns is a chatbot-like model for question answering. It was built by fine-tuning [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) on the [SQuAD](https://huggingface.co/datasets/squad) dataset. This repo only includes the QLoRA adapters from fine-tuning with 🤗's [peft](https://github.com/huggingface/peft) package.
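
Since this repo ships only the adapter weights, inference means loading the Falcon-7B base model and attaching the adapters on top. Below is a minimal sketch of that with `transformers` and `peft`; the adapter repo id and the prompt template are hypothetical placeholders, since the card does not state either.

```python
# Minimal inference sketch: load the Falcon-7B base model, then attach
# the QLoRA adapters from this repo with peft.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "tiiuae/falcon-7b"
adapter_id = "<this-repo-id>"  # hypothetical: replace with this repo's id

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, adapter_id)

# Assumed QA prompt shape; the card does not specify the training template.
prompt = (
    "Question: Where is the Eiffel Tower?\n"
    "Context: The Eiffel Tower is a landmark in Paris, France.\n"
    "Answer:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```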

## Model Summary

- **Model Type:** Causal decoder-only
- **Language(s):** English
- **Base Model:** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) (License: Apache 2.0)
- **Dataset:** [SQuAD](https://huggingface.co/datasets/squad) (License: cc-by-4.0)
- **License(s):** Apache 2.0, inherited from the base model and dataset

## Model Details

The model was fine-tuned in 4-bit precision using 🤗 `peft` adapters, `transformers`, and `bitsandbytes`. Training relied on a method called "Low-Rank Adapters" ([LoRA](https://arxiv.org/pdf/2106.09685.pdf)), specifically the [QLoRA](https://arxiv.org/abs/2305.14314) variant. The run took approximately 4 hours and was executed on a workstation with a single NVIDIA T4 GPU with 15 GB of available memory. See the attached [Colab Notebook] used to train the model.
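
For concreteness, here is a sketch of the 4-bit QLoRA setup described above, using `BitsAndBytesConfig` from `transformers` and `LoraConfig` from `peft`. The LoRA hyperparameters (rank, alpha, dropout) are illustrative assumptions rather than the run's actual values; the attached Colab notebook holds the real configuration.

```python
# Sketch of the 4-bit (QLoRA) fine-tuning setup for Falcon-7B.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "tiiuae/falcon-7b"

# NF4 quantization with double quantization, as introduced by the QLoRA paper
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # T4 has no bfloat16 support
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                # rank: assumption
    lora_alpha=32,                       # assumption
    target_modules=["query_key_value"],  # Falcon's fused attention projection
    lora_dropout=0.05,                   # assumption
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```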

### Model Date

July 06, 2023

Falcon-7B, an open-source large language model, was fine-tuned on the SQuAD dataset for question answering.

The QLoRA technique was used to fine-tune the model on a consumer-grade GPU; `trl`'s SFTTrainer was also used, as sketched below.

- Dataset used: SQuAD
- Dataset size: 87,278 examples
- Training steps: 500
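
A sketch of the SFTTrainer run those numbers imply: 500 optimizer steps over SQuAD, reusing the 4-bit PEFT `model` and `tokenizer` from the previous sketch. Only `max_steps=500` and the dataset come from the card; the prompt template, batch size, and learning rate are assumptions.

```python
# Sketch of the 500-step supervised fine-tuning run with trl's SFTTrainer.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

dataset = load_dataset("squad", split="train")

def format_examples(batch):
    # Hypothetical QA prompt template; the card does not specify one.
    texts = []
    for q, c, a in zip(batch["question"], batch["context"], batch["answers"]):
        texts.append(f"Question: {q}\nContext: {c}\nAnswer: {a['text'][0]}")
    return texts

args = TrainingArguments(
    output_dir="falcon-7b-queans",
    per_device_train_batch_size=4,   # assumption
    gradient_accumulation_steps=4,   # assumption
    learning_rate=2e-4,              # assumption
    max_steps=500,                   # from the card
    fp16=True,                       # T4: no bfloat16 support
    logging_steps=25,
)

trainer = SFTTrainer(
    model=model,                     # 4-bit PEFT model from the sketch above
    args=args,
    train_dataset=dataset,
    formatting_func=format_examples,
    tokenizer=tokenizer,
    max_seq_length=512,              # assumption
)
trainer.train()
```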

## Training procedure