---
license: apache-2.0
---

# Model Card for slim-sql-1b-v0

<!-- Provide a quick summary of what the model is/does. -->

slim-sql-1b-v0 is the first model in the SLIM (Specialized Language Instruct Model) series, fine-tuned to generate SQL queries from natural language questions over simple table structures.

### Benchmark Tests

Evaluated against a test set of 100 SQL queries, each under 100 characters. One point was given for an exact string match with the reference query, and zero points for an incorrect answer.

- **Accuracy Score**: **86** correct out of 100
- 8 incorrect answers attributed to query structure ordering or naming convention differences
- 6 incorrect answers attributed to incorrect variable selection or aggregate function use
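
The scoring rule above amounts to a simple exact-match count. A minimal sketch, assuming the generated and reference queries are held in two parallel lists (the names `predictions` and `references` are illustrative; see generation_test.py in the repo for the actual test script):

    # 1 point if the generated query exactly matches the reference query, else 0
    score = sum(1 if pred == ref else 0 for pred, ref in zip(predictions, references))
    print(f"{score} correct out of {len(references)}")  # e.g. 86 correct out of 100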

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** llmware
- **Model type:** TinyLlama
- **Language(s) (NLP):** English
- **License:** apache-2.0
- **Finetuned from model:** [TinyLlama-1.1b - 2.5T checkpoint](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1195k-token-2.5T)

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

slim-sql-1b-v0 is designed to generate accurate SQL queries for data retrieval on simple table structures, given a natural language prompt. For best results, structure the prompt as a question that retrieves information or applies an aggregate function to one or more variables.
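
For example, an input of the following shape (a hypothetical table and question, shown in the same format as the test samples described below) should yield the query given as the answer:

    {"context": "CREATE TABLE employees (name VARCHAR, department VARCHAR, salary VARCHAR)", "question": "How many employees are in the marketing department?", "answer": "SELECT COUNT(name) FROM employees WHERE department = 'marketing'"}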

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

Any model can provide inaccurate or incomplete information, and should be used in conjunction with appropriate safeguards and fact-checking mechanisms.

## How to Get Started with the Model

The fastest way to get started with slim-sql-1b-v0 is through direct import in transformers:

    from transformers import AutoTokenizer, AutoModelForCausalLM

    tokenizer = AutoTokenizer.from_pretrained("llmware/slim-sql-1b-v0")
    model = AutoModelForCausalLM.from_pretrained("llmware/slim-sql-1b-v0")

Please refer to the generation_test.py file in the Files section of this repository, which includes 100 samples and a script to test the model.

The slim-sql model was fine-tuned with a simple "\<human>" and "\<bot>" wrapper, so to get the best results, wrap inference entries as:

    full_prompt = "<human>: " + my_prompt + "\n" + "<bot>:"

The prompt consists of two sub-parts:

1. A table creation statement providing the table name, variables, and variable types.
2. A specific question or instruction based on the table defined in the context.

Test sample example:

    {"context": "CREATE TABLE table_name_34 (season VARCHAR, lost VARCHAR, points VARCHAR)", "question": "Which season did the Minnesota Kicks lose 13 games and score 156 points?", "answer": "SELECT COUNT(season) FROM table_name_34 WHERE lost = 13 AND points = 156"}

A subset of the test samples is provided in this repo ("sql_test_100_simple_s").

For use in training, the "\<human>" tag is associated with the "context" and "question" statements, while the "\<bot>" tag is associated with the model's output (the "answer").
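
As an illustration, the test sample above would be packaged along these lines (a minimal sketch; the variable names are illustrative, and the exact spacing after the "\<bot>:" tag in the training text is an assumption):

    sample = {"context": "CREATE TABLE table_name_34 (season VARCHAR, lost VARCHAR, points VARCHAR)",
              "question": "Which season did the Minnesota Kicks lose 13 games and score 156 points?",
              "answer": "SELECT COUNT(season) FROM table_name_34 WHERE lost = 13 AND points = 156"}

    # inference prompt: context + question behind the <human> tag, <bot> left open for the model
    inference_prompt = "<human>: " + sample["context"] + "\n" + sample["question"] + "\n" + "<bot>:"

    # training text: same wrapper, with the reference answer following the <bot> tag (spacing is an assumption)
    training_text = inference_prompt + " " + sample["answer"]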

If you are using a HuggingFace generation script:

    # prepare prompt packaging used in fine-tuning process
    # (assumes tokenizer and model are loaded as above, and device is set to "cuda" or "cpu")
    new_prompt = "<human>: " + entries["context"] + "\n" + entries["question"] + "\n" + "<bot>:"

    inputs = tokenizer(new_prompt, return_tensors="pt")
    start_of_output = len(inputs.input_ids[0])

    # temperature: set at 0.3 for consistency of output
    # max_new_tokens: set at 100 - may prematurely truncate a few longer queries

    outputs = model.generate(
        inputs.input_ids.to(device),
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=True,
        temperature=0.3,
        max_new_tokens=100,
    )

    # decode only the newly generated tokens (the SQL query)
    output_only = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
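
To reproduce the style of the benchmark test, the loop below applies the exact-match scoring described above to each test sample. This is an illustrative sketch rather than the repo's test script: it assumes the samples have already been loaded into a list of dicts named `samples` with "context", "question", and "answer" keys (the loading step is not shown; generation_test.py remains the reference).

    correct = 0
    for entries in samples:  # assumed: list of dicts with "context", "question", "answer"
        prompt = "<human>: " + entries["context"] + "\n" + entries["question"] + "\n" + "<bot>:"
        inputs = tokenizer(prompt, return_tensors="pt")
        start_of_output = len(inputs.input_ids[0])
        outputs = model.generate(
            inputs.input_ids.to(device),
            eos_token_id=tokenizer.eos_token_id,
            pad_token_id=tokenizer.eos_token_id,
            do_sample=True,
            temperature=0.3,
            max_new_tokens=100,
        )
        prediction = tokenizer.decode(outputs[0][start_of_output:], skip_special_tokens=True)
        # exact string match (whitespace-stripped)
        correct += 1 if prediction.strip() == entries["answer"].strip() else 0

    print(f"{correct} correct out of {len(samples)}")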

## Model Card Contact

Dylan Oberst & llmware team