It looks like it does not work as expected, see below

#17 by Sakura77 - opened

This is what I receive for a simple question:

[screenshot of the model's response]

This is the code I used

import torch
import os
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Set environment variable for CUDA operations
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

# Directory and model identification
model_dir = r'g:\Projects\localLLM\input\model\google\gemma-2-9b-it'

# Ensure the model and tokenizer are loaded onto the appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")
# Prepare device mapping for model components ("" maps the whole model to CPU)
device_map = "auto" if device.type == 'cuda' else {"": "cpu"}

# Load model and tokenizer with explicit device mapping
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map=device_map,
    torch_dtype=torch.float16 if device.type == 'cuda' else torch.float32,
    trust_remote_code=True,
    local_files_only=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_dir,
    local_files_only=True
)

# Define the pipeline with model and tokenizer
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device.index if device.type == 'cuda' else -1
)

# Continuous interaction loop
while True:
    user_question = input("Please enter your question: ")
    if user_question.lower() in ["quit", "exit", "stop"]:
        print("Exiting the session.")
        break

    # Arguments for text generation
    generation_args = {
        "max_new_tokens": 256,
        "return_full_text": False,
        "temperature": 0.2,
        "do_sample": True,
    }

    # Generate text based on user input
    output = pipe(user_question, **generation_args)
    print("Response:", output[0]['generated_text'])
Google org

Hello @Sakura77 , do you mind sharing a colab notebook that reproduces the issue so that we may take a look?

Google org

Please also make sure to use the latest transformers version (v4.42.3), thanks 🤗

Hello, I run it on my PC, see the code above. I used transformers-4.42.0.dev0-py3-none-any.whl, as mentioned in the model's folder.

I just installed transformers 4.42.3, and it's the same, please see below

[screenshot of the same output]

Sakura77 changed discussion status to closed
Sakura77 changed discussion status to open
Google org

Ok I see, let's try with this change then if you can @Sakura77 :

model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map=device_map,
+   torch_dtype=torch.bfloat16,
+   attn_implementation='eager',
    local_files_only=True
)

The two important parts here are torch_dtype=torch.bfloat16, as that's what the model was trained with, and attn_implementation='eager' as eager attention is really important for the gemma model.
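If you want to double-check that both settings took effect after loading, a quick sanity check could look like this (a sketch; it uses the model object from the snippet above, and _attn_implementation is an internal config attribute that may differ across transformers versions):

# Sanity check (uses the `model` loaded with the change above)
print(model.dtype)                        # expected: torch.bfloat16
print(model.config._attn_implementation)  # expected: 'eager'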

I created a new env again, and I get the same answers, even with these changes

# Load model and tokenizer with explicit device mapping
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map=device_map,
    torch_dtype=torch.bfloat16,
    attn_implementation='eager',
    local_files_only=True
)

Google org

Do you mind trying with the code snippet in this response, but with the 9b-it model you're using here?

https://huggingface.co/google/gemma-2-27b-it/discussions/14#668280486076c1a904c790e6
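Not the exact snippet from that link, but a typical chat-style call for the -it model looks roughly like this (a sketch, assuming the same local model_dir as in the code above):

import torch
from transformers import pipeline

# Chat-style generation sketch; model_dir points to the local gemma-2-9b-it folder
chat_pipe = pipeline(
    "text-generation",
    model=model_dir,
    model_kwargs={"torch_dtype": torch.bfloat16, "attn_implementation": "eager"},
    device_map="auto",
)
messages = [{"role": "user", "content": "Why is the sky blue?"}]
out = chat_pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])  # only the assistant turn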

Google org

Hi @Sakura77 ! In addition to the other recommendations in this thread, could you try to add add_special_tokens: True to your generation_args?

generation_args = {
    "max_new_tokens": 256,
    "return_full_text": False,
+   "add_special_tokens": True,
    "temperature": 0.2,
    "do_sample": True,
}

Otherwise, the input to the model will be missing an initial <bos> token, and the model is very sensitive to that.
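A quick way to see the difference is to encode a prompt with and without special tokens (a sketch, using the tokenizer from the code above):

# With add_special_tokens=True the first id is the <bos> token; without it, it is not
with_bos = tokenizer("Why is the sky blue?", add_special_tokens=True).input_ids
without_bos = tokenizer("Why is the sky blue?", add_special_tokens=False).input_ids
print(tokenizer.bos_token_id, with_bos[0])   # these two should match
print(without_bos[0])                        # no leading <bos> id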

Thank you for your time, it now works perfectly, see below

[screenshot of the correct response]

This is the code I used :)

import torch
import os
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

# Set environment variable for CUDA operations
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'

# Directory and model identification
model_dir = r'g:\Projects\localLLM\input\model\google\gemma-2-9b-it'

# Ensure the model and tokenizer are loaded onto the appropriate device
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Prepare device mapping for model components ("" maps the whole model to CPU)
device_map = "auto" if device.type == 'cuda' else {"": "cpu"}

# Load model and tokenizer with explicit device mapping
model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map=device_map,
    torch_dtype=torch.bfloat16,
    attn_implementation='eager',
    local_files_only=True
)
tokenizer = AutoTokenizer.from_pretrained(
    model_dir,
    local_files_only=True
)

# Define the pipeline with model and tokenizer
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    device=device.index if device.type == 'cuda' else -1
)

# Continuous interaction loop
while True:
    user_question = input("Please enter your question: ")
    if user_question.lower() in ["quit", "exit", "stop"]:
        print("Exiting the session.")
        break

    # Arguments for text generation
    generation_args = {
        "max_new_tokens": 256,
        "return_full_text": False,
        "add_special_tokens": True,
        "temperature": 0.2,
        "do_sample": True,
    }

    # Generate text based on user input
    output = pipe(user_question, **generation_args)
    print("Response:", output[0]['generated_text'])
Google org

Awesome, glad to hear it! Thanks for working with us on this one @Sakura77

But neither 'eager' nor 'add_special_tokens' is specified in the model card, right? Is it possible to add official instructions on how to do inference with Gemma 2 correctly? Thanks!

Hi @OliverNova @Sakura77 👋
The add_special_tokens issue was a bug in our pipeline code, fixed in this PR 🤗
