jonathanjordan21 committed on

Commit 5b0ffc8
1 Parent(s): ddfadbd

Upload model

Files changed (2)
  1. README.md +23 -65
  2. adapter_model.safetensors +1 -1
README.md CHANGED
@@ -1,89 +1,47 @@
  ---
  library_name: peft
  base_model: declare-lab/flan-alpaca-base
- datasets:
- - knowrohit07/know_sql
- license: mit
- language:
- - en
- pipeline_tag: text2text-generation
- tags:
- - sql
- - query
- - database
  ---

- ## Model Details
-
- ### Model Description
-
- This model is based on declare-lab/flan-alpaca-base, fine-tuned on the knowrohit07/know_sql dataset.

- - **Developed by:** Jonathan Jordan
- - **Model type:** FLAN Alpaca
- - **Language(s) (NLP):** English
- - **License:** [More Information Needed]
- - **Finetuned from model:** declare-lab/flan-alpaca-base
-
- ## Uses

- The model generates an SQL query string from a natural-language question and a MySQL table schema.
- If you are using a different SQL database (e.g. PostgreSQL, Oracle), adapt the table schema to MySQL-style syntax before passing it to the model.
- The generated query can then be executed through a Python database connector (e.g. psycopg2, mysql.connector), as in the sketch below.
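A minimal sketch of executing a generated query, assuming an in-memory SQLite database and a toy table as hypothetical stand-ins for a real MySQL or PostgreSQL connection; a psycopg2 or mysql.connector connection would follow the same cursor/execute pattern.

```python
# Minimal sketch: run a model-generated query against a toy database.
# The in-memory SQLite database and sample row are hypothetical stand-ins;
# psycopg2 or mysql.connector connections follow the same cursor pattern.
import sqlite3

# A query string in the format the model produces (see the Output Example below).
generated_sql = 'SELECT * FROM table_1341598_10 WHERE district = "Florida 18"'

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Toy table mirroring the schema from the Input Example, with one sample row.
cur.execute("CREATE TABLE table_1341598_10 (result TEXT, district TEXT)")
cur.execute("INSERT INTO table_1341598_10 VALUES ('Re-elected', 'Florida 18')")

cur.execute(generated_sql)  # execute the generated SQL as-is
print(cur.fetchall())       # [('Re-elected', 'Florida 18')]

conn.close()
```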

- #### Limitations
- 1. The question MUST be in English.
- 2. Keep in mind that data type names differ between MySQL and other SQL databases.
- 3. The output always starts with SELECT *; you cannot choose which columns to retrieve.
- 4. Aggregation functions are not supported.

- ### Input Example
- ```python
- """Question: What was the result of the election in the Florida 18 district?\nTable: table_1341598_10 (result VARCHAR, district VARCHAR)\nSQL: """
- ```
- ### Output Example
- ```python
- 'SELECT * FROM table_1341598_10 WHERE district = "Florida 18"'
- ```
-
- ### How to use
- Load model

- ```python
- from peft import get_peft_config, get_peft_model, TaskType
- from peft import PeftConfig, PeftModel
- from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

- # Load the adapter config, the base seq2seq model, and its tokenizer
- model_id = "jonathanjordan21/flan-alpaca-base-finetuned-lora-knowSQL"
- config = PeftConfig.from_pretrained(model_id)
- model_ = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, return_dict=True)
- tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

- # Attach the LoRA adapter weights to the base model
- model = PeftModel.from_pretrained(model_, model_id)
- model = get_peft_model(model, config)
- ```

- Model inference

- ```python
- import torch
-
- question = "server of user id 11 with status active and server id 10"
- table = "table_name_77 ( user id INTEGER, status VARCHAR, server id INTEGER )"

- # Build the prompt in the same Question/Table/SQL format used for training
- test = f"""Question: {question}\nTable: {table}\nSQL: """

- p = tokenizer(test, return_tensors='pt')

- device = "cuda" if torch.cuda.is_available() else "cpu"

- # Generate the SQL query and decode it back to text
- print("output :", tokenizer.batch_decode(model.to(device).generate(**p.to(device), max_new_tokens=50), skip_special_tokens=True)[0])
- ```
 
- ## Performance

- ### Speed Performance
- Model inference takes about 2-3 seconds on the Google Colab free-tier CPU.

  ### Downstream Use [optional]
 
@@ -247,4 +205,4 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  ### Framework versions


- - PEFT 0.6.2

  ---
  library_name: peft
  base_model: declare-lab/flan-alpaca-base
  ---

+ # Model Card for Model ID

+ <!-- Provide a quick summary of what the model is/does. -->

+ ## Model Details

+ ### Model Description

+ <!-- Provide a longer summary of what this model is. -->

+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]

+ ### Model Sources [optional]

+ <!-- Provide the basic links for the model. -->

+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]

+ ## Uses

+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

+ ### Direct Use

+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

+ [More Information Needed]

  ### Downstream Use [optional]


  ### Framework versions


+ - PEFT 0.6.2
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a01743504ce860af99541c786ceb2897d01358a226ed32bbb6e364ac06689249
  size 7101392

  version https://git-lfs.github.com/spec/v1
+ oid sha256:6333ab988e3f3a31ecaa5e417b5e624c45f1510e12c59e11305e420a4ad70fcd
  size 7101392