Omaratef3221 committed on
Commit
d846f5f
1 Parent(s): 73b8047

Update README.md

Files changed (1)
  1. README.md +57 -26
README.md CHANGED
@@ -4,51 +4,82 @@ base_model: Qwen/Qwen2-0.5B-Instruct
  tags:
  - trl
  - sft
  - generated_from_trainer
  model-index:
- - name: tmp_trainer
  results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # tmp_trainer

- This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on an unknown dataset.

- ## Model description

- More information needed

- ## Intended uses & limitations

- More information needed

- ## Training and evaluation data

- More information needed

- ## Training procedure

- ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- - lr_scheduler_type: linear
- - num_epochs: 3.0

- ### Training results

- ### Framework versions

- - Transformers 4.39.0
- - Pytorch 2.2.0
- - Datasets 2.20.0
- - Tokenizers 0.15.2
 
  tags:
  - trl
  - sft
+ - text-to-SQL
  - generated_from_trainer
  model-index:
+ - name: Qwen2-0.5B-Instruct-SQL-query-generator
  results: []
  ---

+ # Qwen2-0.5B-Instruct-SQL-query-generator

+ This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the first 10,000 rows of the [motherduckdb/duckdb-text2sql-25k](https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k) dataset.

+ ## Model Description

+ Qwen2-0.5B-Instruct-SQL-query-generator is fine-tuned to generate SQL queries from natural language prompts, supporting tasks such as data retrieval and database querying through natural language interfaces.

+ ## Intended Uses & Limitations

+ ### Intended Uses

+ - Convert natural language questions to SQL queries.
+ - Facilitate data retrieval from databases using natural language.
+ - Assist in building natural language interfaces for databases.

+ ### Limitations

+ - The model is fine-tuned on a specific subset of data and may not generalize well to all SQL dialects or database schemas.
+ - Review generated SQL queries for accuracy and security before executing them against live databases.
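One lightweight way to act on that last point is to compile a generated query against an empty scratch database before running it on real data. The sketch below uses Python's built-in `sqlite3`; the `employees` table and its schema are hypothetical, chosen to match the example prompt later in this card.

```python
import sqlite3

def is_valid_select(query: str) -> bool:
    """Check that a generated query parses and is a read-only SELECT.

    EXPLAIN forces SQLite to compile the statement against an empty
    in-memory database, so syntax and structural errors surface without
    touching real data. The schema below is hypothetical.
    """
    if not query.lstrip().lower().startswith("select"):
        return False  # reject anything that is not a read-only SELECT
    conn = sqlite3.connect(":memory:")
    try:
        # Hypothetical schema matching the example prompt in this card.
        conn.execute("CREATE TABLE employees (name TEXT, salary REAL)")
        conn.execute("EXPLAIN " + query)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

print(is_valid_select("SELECT name FROM employees WHERE salary > 100000"))  # True
print(is_valid_select("DROP TABLE employees"))                              # False
```

Queries referencing tables or columns that do not exist in the scratch schema also fail compilation, which catches a common hallucination mode.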

+ ## Training and Evaluation Data

+ ### Training Data
+
+ The model was fine-tuned on the [motherduckdb/duckdb-text2sql-25k](https://huggingface.co/datasets/motherduckdb/duckdb-text2sql-25k) dataset, using the first 10,000 rows. The dataset pairs natural language questions with their corresponding SQL queries, providing a solid foundation for training a text-to-SQL model.
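A minimal sketch of how this 10k-row subset could be selected with the `datasets` library's split-slicing syntax (requires the `datasets` package and network access; any preprocessing beyond the row selection is not documented in this card):

```python
# Sketch: select the first 10,000 rows of the training split.
# Everything beyond the row selection itself is an assumption.
from datasets import load_dataset

train_ds = load_dataset("motherduckdb/duckdb-text2sql-25k", split="train[:10000]")
print(len(train_ds))  # 10000
```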
+
+ ### Evaluation Data
+
+ The evaluation data was a subset of the same dataset, ensuring consistency between training and evaluation.
+
+ ## Training Procedure
+
+ ### Training Hyperparameters

  The following hyperparameters were used during training:
+ - `learning_rate`: 1e-4
+ - `train_batch_size`: 8
+ - `save_steps`: 1
+ - `logging_steps`: 500
+ - `num_epochs`: 5
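As a rough illustration, these hyperparameters would map onto `transformers.TrainingArguments` along these lines. The actual training script is not part of this card, so everything except the five listed values (including the output directory) is an assumption.

```python
# Sketch only: maps the listed hyperparameters onto TrainingArguments.
# Argument names beyond the five documented values are assumptions.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="Qwen2-0.5B-Instruct-SQL-query-generator",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    save_steps=1,
    logging_steps=500,
    num_train_epochs=5,
)
```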
+
+ ### Training Frameworks
+
+ - Transformers: 4.39.0
+ - PyTorch: 2.2.0
+ - Datasets: 2.20.0
+ - Tokenizers: 0.15.2
+
+ ### Training Results
+
+ The model was evaluated periodically during training; training metrics and results were logged for later analysis.
+
+ ## Model Performance
+
+ ### Evaluation Metrics
+
+ - Evaluation metrics such as accuracy, precision, recall, and F1-score were used to assess performance. (Specific values can be added here if available.)
 
+ ## Usage

+ To use the model, load it from the Hugging Face Model Hub and provide a natural language prompt; it will generate the corresponding SQL query. Note that Qwen2 is a decoder-only model, so it is loaded with `AutoModelForCausalLM`:

+ ```python
+ from transformers import AutoTokenizer, AutoModelForCausalLM

+ tokenizer = AutoTokenizer.from_pretrained("omaratef3221/Qwen2-0.5B-Instruct-SQL-query-generator")
+ model = AutoModelForCausalLM.from_pretrained("omaratef3221/Qwen2-0.5B-Instruct-SQL-query-generator")

+ inputs = tokenizer("Show me all employees with a salary greater than $100,000", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```