---
library_name: transformers
license: mit
datasets:
- Intel/orca_dpo_pairs
language:
- en
---

# Phi3-DPO (The Finetuned One)


A DPO fine-tune of microsoft/Phi-3-mini-4k-instruct (3.82B parameters) on the Intel/orca_dpo_pairs preference dataset.
**Phi3-TheFinetunedOne** was fine-tuned after wrapping the microsoft/Phi-3-mini-4k-instruct base model with PEFT adapters.
Named after the anime character Satoru Gojo.

<img src="https://cdn-uploads.huggingface.co/production/uploads/658f7b32dfca9fad61344f82/AiWqrbc0HXB7_DpDhZr4z.webp" alt="Image Description" width="400"/>
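
## Training

The snippet below is a minimal sketch of what a PEFT + DPO run like the one described above looks like with `peft` and `trl` (0.7-style API). The LoRA targets, hyperparameters, and column mapping are illustrative assumptions, not the exact recipe used for this checkpoint.

```python
# Illustrative sketch (trl ~0.7-style API): QLoRA + DPO on Intel/orca_dpo_pairs.
# LoRA targets and hyperparameters here are assumptions, not the exact recipe.
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import DPOTrainer

model_name = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Load the base model in 4-bit so training fits on a single T4
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,  # T4 has no bfloat16 support
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, trust_remote_code=True
)

# Intel/orca_dpo_pairs ships system/question/chosen/rejected columns;
# DPOTrainer expects prompt/chosen/rejected.
def to_dpo_format(row):
    prompt = tokenizer.apply_chat_template(
        [{"role": "system", "content": row["system"]},
         {"role": "user", "content": row["question"]}],
        tokenize=False, add_generation_prompt=True,
    )
    return {"prompt": prompt, "chosen": row["chosen"], "rejected": row["rejected"]}

dataset = load_dataset("Intel/orca_dpo_pairs", split="train").map(to_dpo_format)

peft_config = LoraConfig(  # assumed LoRA targets for Phi-3's fused projections
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["qkv_proj", "o_proj"], task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model,
    ref_model=None,  # with peft_config, the frozen base weights act as the reference
    beta=0.1,
    args=TrainingArguments(
        output_dir="phi3-dpo", per_device_train_batch_size=1,
        gradient_accumulation_steps=4, learning_rate=5e-5,
        num_train_epochs=1, fp16=True,
    ),
    train_dataset=dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
    max_length=1024,
    max_prompt_length=512,
)
trainer.train()
```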

## Usage

```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and bfloat16 compute
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_use_double_quant=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model_name = "microsoft/Phi-3-mini-4k-instruct"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map=device,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,  # keep unquantized modules in the compute dtype
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are Satoru Gojo, a helpful AI sorcery assistant. Throughout the 3B parameters, you alone are the honored one."},
    {"role": "user", "content": "What is Sorcery?"},
]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)

# Create the text-generation pipeline
pipe = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

# Generate a completion for the chat prompt
sequences = pipe(
    prompt,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    num_return_sequences=1,
    max_new_tokens=200,  # bound new tokens rather than total length
)
print(sequences[0]["generated_text"])
```
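
The snippet above loads the quantized base model only. If the DPO adapter from this card is published as a separate PEFT repo, it can be attached on top of the base model with `peft`; the repo id below is a hypothetical placeholder, not a published artifact.

```python
from peft import PeftModel

# Hypothetical adapter repo id -- replace with the actual repo for this model card
adapter_id = "your-username/Phi3-TheFinetunedOne"
model = PeftModel.from_pretrained(model, adapter_id)
```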

## Limitations

Phi3-TheFinetunedOne was fine-tuned on a T4 Colab GPU. Further fine-tuning with additional adapters is possible on
devices where `torch.cuda.get_device_capability()[0] >= 8`, i.e. Ampere or newer GPUs.
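
A runtime check along these lines (a small sketch, not code from this card) picks a compute dtype based on the GPU generation:

```python
import torch

# Ampere (compute capability 8.x) and newer GPUs support bfloat16;
# older cards such as the T4 (capability 7.5) fall back to float16.
if torch.cuda.is_available() and torch.cuda.get_device_capability()[0] >= 8:
    compute_dtype = torch.bfloat16
else:
    compute_dtype = torch.float16
```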

- **Developed by:** Shubh Mishra, 2024
- **Model Type:** Causal language model (text generation)
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** microsoft/Phi-3-mini-4k-instruct