---
license: mit
language:
- en
base_model:
- NousResearch/Hermes-3-Llama-3.1-8B
pipeline_tag: text-generation
tags:
- text-generation-inference
---
## Inference
```py
# Install transformers from source at the commit matching 4.47.0.dev0
!git clone https://github.com/huggingface/transformers.git
%cd transformers
!git checkout <commit_id_for_4.47.0.dev0>
!pip install .
# Install the pinned quantization and fine-tuning dependencies
!pip install -q accelerate==0.34.2 bitsandbytes==0.44.1 peft==0.13.1
```
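As an optional sanity check (not part of the original setup), you can confirm that the source build and the pinned packages were picked up:

```py
# Optional sanity check: confirm the installed versions
import transformers, accelerate, bitsandbytes, peft

print("transformers:", transformers.__version__)   # should report 4.47.0.dev0
print("accelerate:", accelerate.__version__)
print("bitsandbytes:", bitsandbytes.__version__)
print("peft:", peft.__version__)
```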
#### Importing libraries
```py
import os
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    BitsAndBytesConfig,
    pipeline,
    logging,
)
```
#### Bits&Bytes Config
```py
# Load the base model in 4-bit precision
use_4bit = True

# Compute dtype for 4-bit base models
bnb_4bit_compute_dtype = "float16"
compute_dtype = getattr(torch, bnb_4bit_compute_dtype)

# Quantization type (fp4 or nf4)
bnb_4bit_quant_type = "nf4"

# Nested (double) quantization for additional memory savings
use_nested_quant = False

bnb_config = BitsAndBytesConfig(
    load_in_4bit=use_4bit,
    bnb_4bit_quant_type=bnb_4bit_quant_type,
    bnb_4bit_compute_dtype=compute_dtype,
    bnb_4bit_use_double_quant=use_nested_quant,
)
```
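If your GPU supports bfloat16, it can be a more numerically stable compute dtype than float16. The check below is an optional sketch, not part of the original configuration:

```py
# Optional: GPUs with compute capability >= 8.0 (Ampere and newer) support bfloat16
if use_4bit and compute_dtype == torch.float16:
    major, _ = torch.cuda.get_device_capability()
    if major >= 8:
        print("Your GPU supports bfloat16: consider bnb_4bit_compute_dtype='bfloat16'")
```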
#### Loading Model
```py
# Load base model
model_name = 'Ahanaas/HermesWithYou_V2'
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map=0,
)
```
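To verify that 4-bit loading worked, you can inspect the model's memory footprint (an optional check, not part of the original card):

```py
# Optional: report how much memory the quantized model occupies
print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")
```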
#### Loading Tokenizer
```py
# Load tokenizer and set padding behaviour
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token  # the model has no dedicated pad token
tokenizer.padding_side = "right"
```
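Hermes-3 models use the ChatML format. Assuming the fine-tuned tokenizer inherits Hermes-3's chat template, `tokenizer.apply_chat_template` can build the prompt for you instead of formatting the special tokens by hand; this is an alternative sketch, not the method used in the predictions section below:

```py
# Alternative: let the tokenizer's chat template build the ChatML prompt
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
chatml_prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(chatml_prompt)
```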
#### Predictions
```py
# Run a text-generation pipeline with the loaded model
system_prompt = ''''''  # fill in your system prompt
prompt = ''''''         # fill in your user prompt

pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=128,   # increase this to allow for longer outputs
    temperature=0.5,      # encourages more varied outputs
    top_k=50,             # limits sampling to the top 50 tokens
    do_sample=True,       # enables sampling
    return_full_text=True,
)

# Build the ChatML-style prompt and generate
result = pipe(f"<|im_start|>system\n{system_prompt}\n<|im_end|>\n<|im_start|>user\n{prompt}\n<|im_end|>\n<|im_start|>assistant\n")

generated_text = result[0]['generated_text']
# Print the generated text (prompt plus the model's response)
print(generated_text)
```
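Because `return_full_text=True`, the output includes the prompt as well as the completion. A minimal sketch, assuming the ChatML markers used above, for keeping only the assistant's reply:

```py
# Extract only the assistant's reply from the full generated text
assistant_reply = generated_text.split("<|im_start|>assistant\n")[-1]
assistant_reply = assistant_reply.split("<|im_end|>")[0].strip()
print(assistant_reply)
```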