richardr1126 committed
Commit: 70c6543
Parent: a7252cc
Update README.md
README.md CHANGED
````diff
@@ -10,6 +10,12 @@ datasets:
 library_name: transformers
 license: bigcode-openrail-m
 ---
+### Spider NatSQL Wizard Coder Summary
+
+- This model was created by finetuning [WizardLM/WizardCoder-15B-V1.0](https://huggingface.co/WizardLM/WizardCoder-15B-V1.0) on a NatSQL enhanced Spider context training dataset: [richardr1126/spider-natsql-skeleton-context-finetune](https://huggingface.co/datasets/richardr1126/spider-natsql-skeleton-context-finetune).
+- Finetuning was performed using QLoRa on a single RTX6000 48GB.
+- If you want just the QLoRa/LoRA adapter it is [here](https://huggingface.co/richardr1126/qlora-spider-natsql-wizard-coder-adapter).
+
 ## Citation
 
 Please cite the repo if you use the data or code in this repo.
@@ -49,6 +55,14 @@ Please cite the repo if you use the data or code in this repo.
     pages = "2030--2042",
 }
 ```
+```
+@article{dettmers2023qlora,
+  title={QLoRA: Efficient Finetuning of Quantized LLMs},
+  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
+  journal={arXiv preprint arXiv:2305.14314},
+  year={2023}
+}
+```
 
 ## Disclaimer
````
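The new summary says the checkpoint was produced by QLoRA finetuning of WizardCoder-15B and that the adapter is also published on its own. Below is a minimal sketch, not part of the commit or the card's documented usage, of applying that adapter to the base checkpoint with `transformers` and `peft`. The two repo IDs come from the diff above; the prompt format and generation settings are illustrative assumptions.

```python
# Minimal sketch: attach the published QLoRA/LoRA adapter to the base
# WizardCoder checkpoint with PEFT. Repo IDs are from the model card;
# the prompt format and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "WizardLM/WizardCoder-15B-V1.0"
adapter_id = "richardr1126/qlora-spider-natsql-wizard-coder-adapter"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.float16,  # assumption: fp16 fits the deployment GPU
    device_map="auto",
)
# Wrap the base model with the LoRA weights produced by the QLoRA finetune.
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Assumed Spider-style prompt: schema context followed by a question.
prompt = (
    "Database schema: concert(concert_id, stadium_id, year)\n"
    "Question: How many concerts were held in 2014?\n"
    "NatSQL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading the base model in 4-bit via `bitsandbytes` instead of fp16 would mirror the QLoRA training setup more closely on memory-constrained GPUs.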