---
language:
- en
pipeline_tag: text2text-generation
metrics:
- f1
tags:
- SQL
- plSQL
- english
---

This is a version of FLAN-T5 Large (783M parameters) fine-tuned for English text-to-SQL generation on the public Spider dataset.

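Each Spider example carries a database identifier (db_id), a natural-language question, and a gold SQL query; the snippets below assume one such example is held in a variable named sentence. A minimal loading sketch, assuming the datasets library and that Spider is available under the "spider" identifier on the Hugging Face Hub:

from datasets import load_dataset

# Assumed dataset id; each row has "db_id", "question", and "query" fields.
spider = load_dataset("spider", split="train")
sentence = spider[0]
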
To initialize the model:

from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("MRNH/flan-t5-large-PLsql")

Use the tokenizer:

from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("MRNH/flan-t5-large-PLsql")

input = tokenizer("<question> " + sentence["db_id"] + " </question> " + sentence["question"],
                  text_target=sentence["query"], return_tensors='pt')

To generate text using the model:

output = model.generate(input["input_ids"], attention_mask=input["attention_mask"])

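To recover the predicted SQL string from the generated token ids, the output can be decoded with the tokenizer (a small sketch, assuming the tokenizer loaded above):

predicted_sql = tokenizer.batch_decode(output, skip_special_tokens=True)[0]
print(predicted_sql)
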
Training of the model is performed using the following loss computation, based on the model output h:

h = model(input_ids=input["input_ids"],
          attention_mask=input["attention_mask"],
          labels=input["labels"])
loss, logits = h.loss, h.logits

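For completeness, a minimal training-step sketch built on this loss, assuming PyTorch with the AdamW optimizer (the learning rate is an illustrative placeholder, not the value used for this checkpoint):

import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # illustrative hyperparameters

model.train()
h = model(input_ids=input["input_ids"],
          attention_mask=input["attention_mask"],
          labels=input["labels"])
h.loss.backward()       # backpropagate the sequence-to-sequence cross-entropy loss
optimizer.step()
optimizer.zero_grad()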