---
tags:
- autotrain
- text-generation
- mistral
- fine-tune
- text-generation-inference
- chat
- trained with AutoTrain
- pytorch
widget:
- text: 'I love AutoTrain because '
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Model Trained Using AutoTrain
The mistral-7b-fraud2-finetuned Large Language Model (LLM) is a fine-tuned version of the Mistral-7B-v0.1 generative text model, trained on a collection of synthetically generated fraudulent-transcript datasets.

For full details of this model, please read the release blog post.
## Instruction format
In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant's generation is terminated by the end-of-sentence (EOS) token id.
For example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

# Substitute your fine-tuned checkpoint's repo id here if it is published separately.
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# The special tokens (<s>, </s>) are written into the prompt by hand, so
# add_special_tokens=False below keeps the tokenizer from adding them again.
text = (
    "<s>[INST] Below is a conversation transcript [/INST]"
    "Your credit card has been stolen, and you need to contact us to resolve the issue. "
    "We will help you protect your information and prevent further fraud.</s> "
    "[INST] Analyze the conversation and determine if it's fraudulent or legitimate. [/INST]"
)

encodeds = tokenizer(text, return_tensors="pt", add_special_tokens=False)

model_inputs = encodeds.to(device)
model.to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
```
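
Recent versions of `transformers` can also build this prompt for you from a list of chat messages via `tokenizer.apply_chat_template`. A minimal sketch, assuming the tokenizer ships a Mistral-style chat template:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

messages = [
    {"role": "user", "content": "Below is a conversation transcript"},
    {"role": "assistant", "content": "Your credit card has been stolen, and you need to contact us to resolve the issue."},
    {"role": "user", "content": "Analyze the conversation and determine if it's fraudulent or legitimate."},
]

# Renders the [INST] ... [/INST] wrapping and BOS/EOS placement automatically.
prompt = tokenizer.apply_chat_template(messages, tokenize=False)
print(prompt)
```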
## Model Architecture
This instruction model is based on Mistral-7B-v0.1, a transformer model with the following architecture choices:
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
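
These choices can be inspected directly on the model configuration; a quick sketch, using the field names exposed by `transformers`' `MistralConfig`:

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")

# Grouped-Query Attention: several query heads share each key/value head.
print(config.num_attention_heads, config.num_key_value_heads)

# Sliding-Window Attention: each token attends to at most this many preceding tokens.
print(config.sliding_window)
```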
## Version
- v1
## The Team
- Bilic Team of AI Engineers