---
license: mit
datasets:
- samsum
language:
- en
---
|
|
|
# Llama-2-7b Fine-Tuned Summarization Model |
|
|
|
## Overview |
|
|
|
The Llama-2-7b Fine-Tuned Summarization Model is a language model fine-tuned for text summarization using QLoRA (quantized low-rank adaptation).

It was fine-tuned on the samsum dataset, which contains messenger-style conversations paired with human-written summaries.
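
For reference, QLoRA fine-tuning attaches trainable low-rank adapters to a 4-bit quantized base model. The sketch below illustrates what such a configuration typically looks like with the `peft` library; the hyperparameters (rank, alpha, target modules) are illustrative assumptions, not the published values for this checkpoint.

```python
# Hypothetical QLoRA adapter configuration using the `peft` library.
# All hyperparameters below are illustrative assumptions, not the
# values actually used to train this model.
from peft import LoraConfig

lora_config = LoraConfig(
    r=16,                                 # assumed adapter rank
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    bias="none",
    task_type="CAUSAL_LM",
)
```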
|
|
|
## Model Details |
|
|
|
- Base Model: [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) |
|
- Fine-Tuned on: [samsum dataset](https://huggingface.co/datasets/samsum) |
|
- Language: English |
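
If you want to inspect the training data yourself, the samsum dataset can be loaded with the `datasets` library. A minimal sketch (samsum is distributed as a 7z archive, so `py7zr` must also be installed):

```python
# Minimal sketch: browse one samsum training example.
# Requires `pip install datasets py7zr` (samsum ships as a .7z archive).
from datasets import load_dataset

samsum = load_dataset("samsum")
example = samsum["train"][0]
print(example["dialogue"])  # the conversation to summarize
print(example["summary"])   # the reference summary
```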
|
|
|
## How to Use |
|
|
|
You can use this model for text summarization with the Hugging Face Transformers library. Here's a basic example in Python:
|
|
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "SalmanFaroz/Llama-2-7b-samsum"

# Load the model weights in 4-bit NF4 precision to reduce GPU memory usage
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # quantize weights to 4 bits
    bnb_4bit_use_double_quant=True,         # nested quantization for extra savings
    bnb_4bit_quant_type="nf4",              # NormalFloat4 data type
    bnb_4bit_compute_dtype=torch.bfloat16,  # run computations in bfloat16
)

model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Llama has no pad token by default
tokenizer.padding_side = "right"
|
|
|
# Define the input prompt: an instruction, the dialogue, and a
# "### Summary:" header for the model to complete
prompt = """
Summarize the following conversation.

### Input:
Itachi: Kakashi, you must understand the gravity of the situation. The Akatsuki's plans are far more sinister than you can imagine.
Kakashi: Itachi, I need more than vague warnings. Tell me what you know.
Itachi: Very well. The Akatsuki seeks to capture Naruto for the power of the Nine-Tails sealed within him, but there's an even darker secret lurking within their goals.
Kakashi: Darker than that? What are they truly after?
Itachi: They're hunting the Tailed Beasts for a cataclysmic plan to reshape the world, and only we can stop them, together.

### Summary:
"""
|
|
|
# Tokenize the prompt and move the tensors to the model's device
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 100 new tokens and decode the full sequence
output = tokenizer.decode(
    model.generate(
        **inputs,  # passes input_ids and attention_mask
        max_new_tokens=100,
    )[0],
    skip_special_tokens=True,
)

print("Output:", output)
```
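
Note that causal language models echo the prompt, so the decoded `output` above includes the input text. A small follow-up sketch, reusing the `model`, `tokenizer`, and `inputs` objects from the example above, that keeps only the newly generated tokens:

```python
# Decode only the tokens generated after the prompt, so the printed
# text contains just the summary.
generated = model.generate(**inputs, max_new_tokens=100)
summary = tokenizer.decode(
    generated[0][inputs["input_ids"].shape[1]:],  # drop the prompt tokens
    skip_special_tokens=True,
)
print("Summary:", summary.strip())
```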