---
datasets:
- b-mc2/sql-create-context
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- text-2-sql
- text-generation-inference
---

This model is based on the Llama-2 7B model released by Meta: it accepts a table schema and a natural-language question as text and returns a SQL query. It was fine-tuned from "NousResearch/Llama-2-7b-hf" on the b-mc2/sql-create-context dataset.

```python
! pip install transformers accelerate

# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ekshat/Llama-2-7b-chat-finetune-for-text2sql")
model = AutoModelForCausalLM.from_pretrained("ekshat/Llama-2-7b-chat-finetune-for-text2sql")

# Use a pipeline as a high-level helper (Llama-2 is a causal LM, so the task is "text-generation")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Build a prompt in the format used during fine-tuning: schema as context, then the question
context = "CREATE TABLE Student (name VARCHAR, college VARCHAR, age VARCHAR, group VARCHAR, marks VARCHAR)"
question = "List the names of students who belong to the school 'St. Xavier' and have marks greater than '600'"

prompt = f"""Below is a context that describes a SQL query, paired with a question that provides further information. Write an answer that appropriately completes the request.

### Context:
{context}

### Question:
{question}

### Answer:"""

# Run the text-generation pipeline with our model
sequences = pipe(
    prompt,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)

for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
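
The pipeline echoes the prompt, so `generated_text` contains the prompt followed by the completion. A minimal post-processing sketch for pulling out only the SQL statement, assuming the answer is everything after the final `### Answer:` marker (the `extract_sql` helper is illustrative, not part of the model's API):

```python
def extract_sql(generated_text: str) -> str:
    # Hypothetical helper: keep only the text after the last "### Answer:" marker.
    return generated_text.split("### Answer:")[-1].strip()

for seq in sequences:
    print(f"SQL: {extract_sql(seq['generated_text'])}")
```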
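
Loading the full-precision 7B checkpoint takes roughly 14 GB of GPU memory. If that is too much, here is a sketch of 4-bit quantized loading, assuming the `bitsandbytes` package is installed and a CUDA GPU is available; the pipeline and generation calls above stay the same:

```python
! pip install bitsandbytes

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4 quantization brings the 7B weights down to roughly 4 GB.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ekshat/Llama-2-7b-chat-finetune-for-text2sql",
    quantization_config=bnb_config,
    device_map="auto",  # lets accelerate place layers on the available device(s)
)
```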