---
datasets:
- mlabonne/guanaco-llama2-1k
pipeline_tag: text-generation
---

# Model Card for likhith231/llama-2-7b-miniguanaco

This model is a fine-tuned version of NousResearch/Llama-2-7b-chat-hf on the mlabonne/guanaco-llama2-1k dataset.

## Model Details

### Model Sources

- **Base Model:** NousResearch/Llama-2-7b-chat-hf
- **Demo:** Llama 2 fine-tuning demo

## How to Get Started with the Model

Use the code below to get started with the model. The prompt is wrapped in Llama 2's `[INST] ... [/INST]` instruction tags, which the chat-tuned base model expects.

```python
from transformers import pipeline

prompt = "What is a large language model?"
pipe = pipeline(
    task="text-generation",
    model="likhith231/llama-2-7b-miniguanaco",
    max_length=200,
)
# Wrap the prompt in Llama 2 instruction tags before generating.
result = pipe(f"[INST] {prompt} [/INST]")
print(result[0]['generated_text'])
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto a `TrainingArguments` object is given at the end of this card):

- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1

### Framework versions

- PEFT 0.8.2
- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.17.0
- Tokenizers 0.15.1
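
### Loading the PEFT adapter directly

Since PEFT appears among the framework versions above, the repository may ship a LoRA adapter rather than fully merged weights. If so, the adapter can also be loaded explicitly. This is a minimal sketch that assumes adapter files (`adapter_config.json` and adapter weights) are present in the repo; if the weights are already merged, the pipeline call earlier in this card is sufficient.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model named in adapter_config.json and applies the
# adapter on top. This assumes the repo contains a PEFT adapter; skip
# this if it ships merged weights instead.
model = AutoPeftModelForCausalLM.from_pretrained("likhith231/llama-2-7b-miniguanaco")
tokenizer = AutoTokenizer.from_pretrained("likhith231/llama-2-7b-miniguanaco")
```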
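
### Reconstructing the training configuration

For reference, the hyperparameters listed earlier correspond roughly to the following `transformers.TrainingArguments` configuration. This is a minimal sketch, not the exact training script: `output_dir` is a placeholder, and the PEFT/LoRA adapter configuration actually used for fine-tuning is not documented in this card.

```python
from transformers import TrainingArguments

# Minimal sketch of the configuration implied by the hyperparameters
# above; output_dir is a placeholder, and the adapter setup used for
# fine-tuning is not documented here.
training_args = TrainingArguments(
    output_dir="./results",          # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",             # listed betas=(0.9, 0.999), eps=1e-8 match the defaults
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1,
)
```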