---
language: en
license: apache-2.0
tags:
- llama-2
- fine-tuning
- text-generation
---
# Fine-Tuned Llama 2 Model

This model is a fine-tuned version of [Llama 2](https://huggingface.co/llama-2), trained on a custom dataset.

## Model Description

Provide a description of your model here.

## Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("dlprt/fine-tuned-llama2-7b")
tokenizer = AutoTokenizer.from_pretrained("dlprt/fine-tuned-llama2-7b")

inputs = tokenizer("Hello, world!", return_tensors="pt")
# max_new_tokens avoids the short default generation length
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
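
If this fine-tune was trained on chat-style data in the standard Llama 2 chat template (an assumption — check how the model was trained), prompts should be wrapped in the `[INST]`/`<<SYS>>` format before tokenization. A minimal helper is sketched below; the function name is illustrative:

```python
def format_llama2_prompt(user_message: str, system_prompt: str = "") -> str:
    """Wrap a user message in the standard Llama 2 chat prompt format.

    This assumes the fine-tune follows the Llama 2 chat template; for a
    plain text-generation fine-tune, pass the raw prompt to the tokenizer
    directly instead.
    """
    if system_prompt:
        return f"[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n{user_message} [/INST]"
    return f"[INST] {user_message} [/INST]"

# Build a prompt and feed it to the tokenizer as in the Usage example above.
prompt = format_llama2_prompt("Hello, world!", system_prompt="You are a helpful assistant.")
```

The tokenizer adds the leading `<s>` BOS token itself, so the helper leaves it out.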