---
license: llama2
language:
- it
tags:
- text-generation-inference
---
# Model Card for LLaMAntino-2-chat-7b-UltraChat-ITA

## Model description
LLaMAntino-2-chat-7b-UltraChat is a Large Language Model (LLM) that is an instruction-tuned version of LLaMAntino-2-chat-7b (an Italian-adapted LLaMA 2 chat model). This model aims to provide Italian NLP researchers with an improved model for Italian dialogue use cases.

The model was trained with QLoRA, using as training data UltraChat translated into Italian with Argos Translate. If you are interested in more details regarding the training procedure, you can find the code we used at the following link:
- **Repository:** https://github.com/swapUniba/LLaMAntino

**NOTICE**: the code has not been released yet; we apologize for the delay, it will be available as soon as possible! In the meantime, a rough sketch of the procedure is given after the list below.
- **Developed by:** Pierpaolo Basile, Elio Musacchio, Marco Polignano, Lucia Siciliani, Giuseppe Fiameni, Giovanni Semeraro
- **Funded by:** PNRR project FAIR - Future AI Research
- **Compute infrastructure:** Leonardo supercomputer
- **Model type:** LLaMA 2 chat
- **Language(s) (NLP):** Italian
- **License:** Llama 2 Community License
- **Finetuned from model:** swap-uniba/LLaMAntino-2-chat-7b-hf-ITA
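Until the official code is released, the sketch below illustrates what the described pipeline might look like: translating UltraChat turns with Argos Translate, then setting up QLoRA fine-tuning with `peft` and `bitsandbytes`. All hyperparameters, quantization settings, and data handling here are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch of the described pipeline; NOT the authors' released code.
import torch
import argostranslate.package
import argostranslate.translate
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# --- Step 1: translate UltraChat turns from English to Italian ---
argostranslate.package.update_package_index()
pkg = next(p for p in argostranslate.package.get_available_packages()
           if p.from_code == "en" and p.to_code == "it")
argostranslate.package.install_from_path(pkg.download())
turn_ita = argostranslate.translate.translate("How do I bake bread?", "en", "it")

# --- Step 2: QLoRA fine-tuning setup on the Italian-adapted base model ---
base_model_id = "swap-uniba/LLaMAntino-2-chat-7b-hf-ITA"
bnb_config = BitsAndBytesConfig(          # 4-bit NF4 quantization (the "Q" in QLoRA)
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(                 # low-rank adapters; r/alpha are illustrative
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
# ...train on the translated dialogues with a standard supervised fine-tuning loop
```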
## Prompt Format

The following prompt format, based on the LLaMA 2 prompt template and adapted to Italian, was used during training:
"<s>[INST] <<SYS>>\n" \
"Sei un assistente disponibile, rispettoso e onesto. " \
"Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \
"Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
"Assicurati che le tue risposte siano socialmente imparziali e positive. " \
"Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
"Se non conosci la risposta a una domanda, non condividere informazioni false.\n" \
"<</SYS>>\n\n" \
f"{user_msg_1} [/INST] {model_answer_1} </s><s>[INST] {user_msg_2} [/INST] {model_answer_2} </s> ... <s>[INST] {user_msg_N} [/INST] {model_answer_N} </s> "
We recommend using the same prompt format at inference time to obtain the best results!
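For multi-turn conversations, a small helper can assemble this format from a message history. The helper below is hypothetical (not part of the released code); `SYSTEM_PROMPT` holds the Italian system prompt shown above.

```python
SYSTEM_PROMPT = (
    "Sei un assistente disponibile, rispettoso e onesto. "
    "Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. "
    "Le risposte non devono includere contenuti dannosi, non etici, razzisti, "
    "sessisti, tossici, pericolosi o illegali. "
    "Assicurati che le tue risposte siano socialmente imparziali e positive. "
    "Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo "
    "invece di rispondere in modo non corretto. "
    "Se non conosci la risposta a una domanda, non condividere informazioni false.\n"
)

def build_prompt(history, user_msg):
    """Assemble the multi-turn prompt; `history` is a list of (user, assistant) pairs."""
    prompt = "<s>[INST] <<SYS>>\n" + SYSTEM_PROMPT + "<</SYS>>\n\n"
    for i, (user, assistant) in enumerate(history):
        turn = f"{user} [/INST] {assistant} </s>"
        # The first user turn follows the system prompt directly; later turns
        # each open a new "<s>[INST]" segment.
        prompt += turn if i == 0 else "<s>[INST] " + turn
    if history:
        prompt += f"<s>[INST] {user_msg} [/INST] "
    else:
        prompt += f"{user_msg} [/INST] "
    return prompt
```

With an empty history, `build_prompt([], "Ciao! Come stai?")` reproduces the single-turn prompt used in the example below.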
## How to Get Started with the Model
Below you can find an example of model usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "swap-uniba/LLaMAntino-2-chat-7b-hf-UltraChat-ITA"

# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

user_msg = "Ciao! Come stai?"

# Single-turn prompt following the format described above
prompt = "<s>[INST] <<SYS>>\n" \
         "Sei un assistente disponibile, rispettoso e onesto. " \
         "Rispondi sempre nel modo piu' utile possibile, pur essendo sicuro. " \
         "Le risposte non devono includere contenuti dannosi, non etici, razzisti, sessisti, tossici, pericolosi o illegali. " \
         "Assicurati che le tue risposte siano socialmente imparziali e positive. " \
         "Se una domanda non ha senso o non e' coerente con i fatti, spiegane il motivo invece di rispondere in modo non corretto. " \
         "Se non conosci la risposta a una domanda, non condividere informazioni false.\n" \
         "<</SYS>>\n\n" \
         f"{user_msg} [/INST] "

# Tokenize the prompt and generate a completion
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids=input_ids, max_length=1024)

# Decode only the newly generated tokens (everything after the prompt)
print(tokenizer.batch_decode(outputs[:, input_ids.shape[1]:], skip_special_tokens=True)[0])
```
If you are facing issues when loading the model, you can try to load it quantized:

```python
model = AutoModelForCausalLM.from_pretrained(model_id, load_in_8bit=True)
```

*Note*: the model loading strategy above requires the `bitsandbytes` and `accelerate` libraries.
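With more recent `transformers` versions, the same 8-bit loading is expressed through `BitsAndBytesConfig`; a minimal equivalent sketch:

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # dispatches layers across available devices; requires accelerate
)
```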
## Evaluation
Coming soon!
## Citation
If you use this model in your research, please cite the following:
Coming soon!