---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# BrokenSoul/llama-2-7b-miniguanaco
This is a test model fine-tuned for learning purposes.
## How to use
```python
from transformers import AutoTokenizer, pipeline

model_name = "BrokenSoul/llama-2-7b-miniguanaco"

# Load the tokenizer and configure padding
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Run generation with the Llama 2 instruction prompt format
prompt = "What is a large language model?"
pipe = pipeline(
    task="text-generation",
    model=model_name,
    tokenizer=tokenizer,
    max_length=200,
)
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
```
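If you want more control over where the weights are placed (for example, putting the model on a GPU in half precision), you can load the model explicitly and hand it to the pipeline. The following is a minimal sketch, not part of the original card; it assumes `accelerate` is installed so that `device_map="auto"` is available.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

model_name = "BrokenSoul/llama-2-7b-miniguanaco"

# Load the weights explicitly and let accelerate place them on available devices
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # half precision to reduce memory use
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=200)
print(pipe("<s>[INST] What is a large language model? [/INST]")[0]["generated_text"])
```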
## Training data
The model was fine-tuned on the mlabonne/guanaco-llama2-1k dataset.
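To inspect the training data yourself, the dataset can be pulled from the Hub with the `datasets` library. This is a minimal sketch; the assumption that the samples live in a `text` column already formatted with Llama 2 `[INST]` tags reflects how this dataset is commonly used, not something stated on this card.

```python
from datasets import load_dataset

# Download the 1k-sample Guanaco subset formatted for Llama 2
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

print(len(dataset))        # number of training samples
print(dataset[0]["text"])  # assumed "text" column with the [INST] ... [/INST] format
```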
## Training procedure
The model was trained following Maxime Labonne's guide; all credit goes to him.
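For reference, that guide fine-tunes a Llama 2 7B base checkpoint on the dataset above using QLoRA (4-bit quantization plus LoRA adapters) with `peft` and `trl`. The sketch below is a condensed approximation of that setup, not the exact script used for this model: the base checkpoint name, the hyperparameters, and the `SFTTrainer` signature (which matches older trl releases, around 0.7) are assumptions.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    TrainingArguments,
)
from trl import SFTTrainer

base_model = "NousResearch/Llama-2-7b-chat-hf"  # assumed base checkpoint
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

# Load the base model in 4-bit so it fits on a single consumer GPU
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"

# Train small LoRA adapters instead of the full 7B weights (assumed ranks/dropout)
peft_config = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")

trainer = SFTTrainer(
    model=model,
    train_dataset=dataset,
    peft_config=peft_config,
    dataset_text_field="text",
    max_seq_length=None,
    tokenizer=tokenizer,
    args=TrainingArguments(
        output_dir="./results",
        num_train_epochs=1,
        per_device_train_batch_size=4,
        learning_rate=2e-4,
    ),
)
trainer.train()
```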