---
library_name: peft
base_model: TheBloke/Llama-2-7b-Chat-GPTQ
pipeline_tag: text-generation
inference: false
license: openrail
language:
- en
datasets:
- flytech/python-codes-25k
tags:
- text2code
- LoRA
- GPTQ
- Llama-2-7B-Chat
- text2python
- instruction2code
---

# Llama-2-7b-Chat-GPTQ fine-tuned on PYTHON-CODES-25K

Generates Python code that accomplishes the instructed task.

## LoRA Adapter Head

### Description

Parameter-Efficient Fine-Tuning (PEFT) of the 4-bit GPTQ-quantized Llama-2-7b-Chat from TheBloke/Llama-2-7b-Chat-GPTQ on the flytech/python-codes-25k dataset.

- **Language(s) (NLP):** English
- **License:** openrail
- **Quantization:** GPTQ 4-bit
- **PEFT:** LoRA
- **Finetuned from model:** [TheBloke/Llama-2-7b-Chat-GPTQ](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GPTQ)
- **Dataset:** [flytech/python-codes-25k](https://huggingface.co/datasets/flytech/python-codes-25k)

## Intended uses & limitations

Explores the efficacy of combining quantization and PEFT. Implemented as a personal project.

### How to use

The quantized base model was fine-tuned with PEFT, so this repository contains only the trained LoRA adapter.
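For a quick start, `peft`'s `AutoPeftModelForCausalLM` can resolve the GPTQ base model from the adapter config and attach the adapter in one call. The snippet below is a minimal sketch rather than the card author's original code; it assumes the adapter repo id shown on this card, a CUDA device, and that the GPTQ dependencies (`optimum`, `auto-gptq`) are installed. The example prompt is illustrative only.

```python
# Minimal quick-start sketch (illustrative, not the original author's code).
# Assumes transformers, peft, optimum and auto-gptq are installed and a CUDA GPU is available.
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "SwastikM/Llama-2-7B-Chat-text2code"

# Reads the adapter config, downloads the GPTQ base model it points to,
# and attaches the LoRA adapter on top of it.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

prompt = "Write a Python function that reverses a string."  # example prompt, not from the card
inputs = tokenizer(prompt, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

`device_map="auto"` places the quantized weights on the available GPU; the GPTQ kernels require a CUDA device.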
The trained adapter is not a standalone model; it has to be loaded on top of the GPTQ base model it was trained from. The explicit steps:

```python
instruction = "Help me set up my daily to-do list!"
```

```python
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer

# Adapter config; records the base model the adapter was trained on.
config = PeftConfig.from_pretrained("SwastikM/Llama-2-7B-Chat-text2code")

# Load the 4-bit GPTQ base model and attach the LoRA adapter.
model = AutoModelForCausalLM.from_pretrained("TheBloke/Llama-2-7b-Chat-GPTQ", device_map="auto")
model = PeftModel.from_pretrained(model, "SwastikM/Llama-2-7B-Chat-text2code")
tokenizer = AutoTokenizer.from_pretrained("SwastikM/Llama-2-7B-Chat-text2code")

inputs = tokenizer(instruction, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(inputs, max_new_tokens=500, do_sample=False, num_beams=1)
code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(code)
```

## Training Details

### Training Data

[flytech/python-codes-25k](https://huggingface.co/datasets/flytech/python-codes-25k)

### Training Procedure

Hugging Face Accelerate with a custom training loop. A hedged sketch of this setup is provided in the appendix at the end of this card.

#### Preprocessing

Each training example pairs a natural-language instruction from the dataset with its reference Python code.

#### Training Hyperparameters

- **Optimizer:** AdamW
- **lr:** 2e-5
- **decay:** linear
- **num_warmup_steps:** 0
- **batch_size:** 8
- **num_training_steps:** 12500

#### Hardware

- **GPU:** P100

### Dataset and Base Model

- **Base model:** [TheBloke/Llama-2-7b-Chat-GPTQ](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GPTQ), a 4-bit GPTQ quantization of Meta AI's Llama-2-7b-Chat
- **Dataset:** [flytech/python-codes-25k](https://huggingface.co/datasets/flytech/python-codes-25k)

## Additional Information

- ***Github:*** [Repository](https://github.com/swastikmaiti/SwastikM-bart-large-nl2sql.git)

## Acknowledgment

Thanks to [@AI at Meta](https://huggingface.co/facebook) for the pretrained Llama-2 model, to [@TheBloke](https://huggingface.co/TheBloke) for the GPTQ quantization, and to [@flytech](https://huggingface.co/flytech) for the dataset.

## Model Card Authors

Swastik Maiti
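## Appendix: Training Setup Sketch

As referenced in the Training Procedure section, the adapter was trained with a Hugging Face Accelerate training loop using the hyperparameters listed above. The original training script is not part of this card, so the following is only a minimal sketch of such a setup: the LoRA rank/alpha, target modules, prompt construction from the dataset's fields, and sequence length are assumptions made for illustration, while the optimizer, learning rate, schedule, batch size, and step count follow the card.

```python
# Illustrative sketch only -- not the author's original training script.
# Hyperparameters marked "from the card" follow this model card; everything else
# (LoRA rank/alpha, target modules, prompt format, max length) is assumed.
# Assumes transformers, peft, accelerate, datasets, optimum and auto-gptq are installed.
import torch
from accelerate import Accelerator
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer, get_scheduler

base_id = "TheBloke/Llama-2-7b-Chat-GPTQ"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token

# 4-bit GPTQ base model; gradients flow only through the LoRA adapters.
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(                      # assumed LoRA settings
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Build instruction -> code training sequences (assumed prompt format).
dataset = load_dataset("flytech/python-codes-25k", split="train")

def tokenize(example):
    text = f"{example['instruction']}\n{example['output']}"
    return tokenizer(text, truncation=True, max_length=512, padding="max_length")

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)
tokenized.set_format("torch")

def collate(batch):
    input_ids = torch.stack([b["input_ids"] for b in batch])
    attention_mask = torch.stack([b["attention_mask"] for b in batch])
    labels = input_ids.clone()
    labels[attention_mask == 0] = -100  # ignore padding in the loss
    return {"input_ids": input_ids, "attention_mask": attention_mask, "labels": labels}

loader = DataLoader(tokenized, batch_size=8, shuffle=True, collate_fn=collate)  # batch_size from the card

# Optimizer and schedule from the card: AdamW, lr 2e-5, linear decay, no warmup, 12500 steps.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
scheduler = get_scheduler("linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=12500)

accelerator = Accelerator()
model, optimizer, loader, scheduler = accelerator.prepare(model, optimizer, loader, scheduler)

step = 0
while step < 12500:
    for batch in loader:
        outputs = model(**batch)
        accelerator.backward(outputs.loss)
        optimizer.step()
        scheduler.step()
        optimizer.zero_grad()
        step += 1
        if step >= 12500:
            break

model.save_pretrained("llama2-chat-gptq-text2code-adapter")  # saves only the LoRA adapter
```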