How to fix garbled output from shenzhi-wang/Llama3-8B-Chinese-Chat

#25 opened by Terence8Tao

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch  # needed for torch.cuda.empty_cache() below

torch.cuda.empty_cache()

model_id = "./model/Llama3-8B-Chinese-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.cuda()
model.eval()

messages = [
    {"role": "user", "content": "介绍一下你自己"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
I downloaded the model locally and ran the official sample code, but it prints gibberish. What could be the problem? Environment: NVIDIA L20 GPU, 48 GB VRAM. Here is part of the generated output:
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:03<00:00, 1.30it/s]
This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (8192). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, R,,,,,, R, A,, A,,,,,, R R, A, A,,,,, R,, R,, R R, R R R,, N, R, R, R R, R N, Mr, R, Mr, N, Mr R,, A, N, Mr, One, N,, R Mr,, One,, R, Mr, R, One, N, One, | N, One, One,, My,, One, Min, One, R, One, Mr, One, N, Mr, One, One, N, One, In,, One, Mr, One, One, In, One, Up, One, So, One, My, One, Up, One, Me, One, One, Done,One, Up, My, One, So In, So In, Mr |, Up, One, Done, One, In, Did, So, In, In, One, So Made In, In, Pr, My, So Did, Did You, Did Made, So (Did, Did One, Did, Did, So | Up, Did, Did, Did, Did You, Did, Did, One, Did, Did, Did Did Did Did You Did Did Did Did Did Did Did (Did So Made Did Left Did Did Did Did Did Did Did | Did Did DidDid Did | Did Made Did Did Did Did | Did | Did Did (Did Did Did So Did Did | Did Did Did Did Did DidDid | Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did DidDid Did ( | Did Did Did DidDid Did Did Did Did Did Did Did Did Did |DidDid Did Did Did Did Did Did Did Did Did Did Did Did Did |Did Did Did Did Did | Did (Did Did Did | Did | Did Did | Did Did Did Did So Left Did Did Did Did Did Did Did | DidDid Did Did Did Did Did Did Did Did | Did | Did Did Did Did Did |Did Did |DidB Did DidDid | | Left Did Did Did Did |Did | |B | Did Did | So SoDid Did Did | Did Did Did Did Did Did Did Did Did Did Did | Did | |B Did Did Did Did Did | Did Did Did Did Did Did | | |Did | Did Did | | So | | | | Did DidSo Did Did |Did | Did |B Did | | | | |Back | |Did | Did Did | | Did | | | | | So | | | Did So |B Did Did Did Did | Did | | |B | | | | |B | | | | | | | | | | | |
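
One note on the log above: the "will exceed the model's predefined maximum length (8192)" reminder fires because the prompt tokens plus max_new_tokens=8192 exceed the model's 8192-token context window. A minimal sketch that budgets generation to stay inside the window (the 2048 cap is an arbitrary illustrative choice):

context_limit = model.config.max_position_embeddings  # 8192 for Llama-3-8B
budget = context_limit - input_ids.shape[-1]          # tokens left for generation
outputs = model.generate(
    input_ids,
    max_new_tokens=min(2048, budget),
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)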

Could you share the prompt that produced this output? Or was it just the "介绍你自己" from the official sample?

If it was the official sample prompt "介绍你自己", I ran exactly the same code as yours twice, and both runs produced normal output.
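
To rule out sampling randomness when comparing machines, it may help to fix the seed or switch to greedy decoding so runs are deterministic. A minimal sketch (my suggestion, not part of the official sample):

from transformers import set_seed

set_seed(0)  # fix RNG state so repeated runs are comparable
# Greedy decoding removes sampling randomness entirely.
outputs = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))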

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

torch.cuda.empty_cache()

model_id = "./model/Llama3-8B-Chinese-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
model.cuda()
model.eval()

messages = [
    {"role": "user", "content": "介绍一下你自己"},
]

# The English prompt below overrides the Chinese one; both were tried.
messages = [
    {"role": "user", "content": "Tell me about yourself"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
The complete code is above. I tried both Chinese and English prompts; the runs raise no errors, but the output is always gibberish. Here is part of the output for the English prompt "Tell me about yourself":
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:02<00:00, 1.91it/s]
This is a friendly reminder - the current text generation call will exceed the model's predefined maximum length (8192). Depending on the model, you may observe exceptions, performance degradation, or nothing at all.
, -, -,,,,,,,,,,,,,,,,,,,,,,,,,,,,, R,, R,, -,,,,,,,, R R, R,, or R, N, or,, R, or R,,,,, R, R,, or, N, N R, N N, N,, Or N, N N, N, R, R, Did,,, A, Did N, Did, Did, Did, R Did,, Did, R Did,, N N, Did Did, So Did Did Did Did Did,, Did Did N Did Did Did, Did Up, So Did,, N, Did, Did N, Did | One, Did Did Did, Did Did Did Did Did, One, Did, Did N, Me Did, Up, Did Up, Did, Did So Did So Did You,, So In, In, Done So | Me, So, Me,, Civil,, Pr So Did Did Did, In, Me, So You, Did Back, Did Did Did Did You, Back Back So Did Did Did Did You Did Did So Did Did Did Did Did, Did So Did Did Did Did Did Did Did Did Did Did Did Did So Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did | Did Did Did So Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did Did | Did Did Did Did Did Did Did |Did Me Did Did Did Did Did Did Did Did Did |Did Did Did DidDid Did | Did | Did | Did Did |D Did Did Did Did DidDid Did Did So Did Did Did Did Did |D Did Did Did Did | Did Did So Did DidDid Had Did Did Did So Did Had Did Did Did Did Did Did Did Did Did Did Did SoDid Had Did Did |DidDid Did Did | | Did |Did Did Did Did Did Did Did Did Did Did Did Did Did |Did Did Did Did So DidDid | | Did Did Did Did Did Did DidDid Did Did DidDid Did Did Did So Did Did Did Did Did Did Did Did Did | Did Did Did So |Did | Did Did |Did | Did So | So Did Did | Did Did Did Did Did Did Did Did Did Did Did Did | Did Did Did Did Did Did Did Did Did Did |B | Did Did Did Did Did Did Did Did Did Did Did Did Did | Did SoDid So Did Did | Did | So | Did Did | Did Did | |DidSo |So | Did | Did Did | So Did Did Did Did | Did So Did Did Did Did | Did | Did Did So | Did Did Did | Did | | Did Did Did | | Did Did Did | |So Did Did | Did So Did Did Did So Did | Did | Did |Did | | | Did | | Did So | So | Did Did Did | Did | Did | Did So Did | Did | | Did | Did Did So Did | Did Did Did Did Did | | | | Did Did Did | | Did Did Did Did | Did | Did | Did So Did | Did | Did | | | | So | Did | Did | Did | So Did | Did | | | | | | So | | | |So | | So Did Did | | Did | | | | Did So Did Did | | | | | Did Did | Did | | | | | So | | Did | | Did | | | | | | | | Did | | | | | Did | | | | | | | | | | | So | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |, | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |, | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
Could it be the GPU model or the dependency versions? Here is my setup:
Python 3.8.10, torch 2.0.1, CUDA 12.4, GPU: L20 (48 GB VRAM)
One more question: running the code above took over 5 minutes. Is that inference time normal? Thanks!

I ran exactly the official sample "介绍你自己" and its English version "Tell me about yourself"; the code is as above.
Additional config: transformers 4.40.1
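
For reference, a quick way to dump the environment details that matter here, using only standard torch/transformers calls:

import torch
import transformers

print("torch:", torch.__version__)
print("transformers:", transformers.__version__)
print("torch CUDA build:", torch.version.cuda)  # CUDA toolkit torch was built against
print("GPU:", torch.cuda.get_device_name(0))
print("compute capability:", torch.cuda.get_device_capability(0))
print("bf16 supported:", torch.cuda.is_bf16_supported())

Two things worth checking from the output: torch 2.0.1 shipped with cu117/cu118 wheels, so the build's CUDA version will differ from the system's 12.4 driver (usually harmless thanks to driver backward compatibility, but worth confirming); and since the L20 is an Ada-generation card that supports bfloat16, loading with torch_dtype=torch.bfloat16 instead of float16 is a cheap experiment, because Llama-3 checkpoints are stored in bf16 and fp16 casts are one commonly suspected source of degenerate output. The 5-minute runtime is likely explained by the degenerate output never emitting an end-of-turn token, so generation runs for the full 8192 new tokens.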

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Raw string avoids backslash-escape warnings in the Windows path.
model_id = r"E:\oobabooga_windows\models\shenzhi-wang_Llama3-8B-Chinese-Chat"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", load_in_4bit=True
)

messages = [
    {"role": "user", "content": "介绍一下你自己"},
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids,
    max_new_tokens=8192,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response, skip_special_tokens=True))
Here is the result:
E:\Python\Model\env\Scripts\python.exe E:\Python\Model\01.py
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run

python -m bitsandbytes

and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

bin D:\Python\Python310\lib\site-packages\bitsandbytes\libbitsandbytes_cuda121.dll
CUDA SETUP: CUDA runtime path found: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.1\bin\cudart64_12.dll
CUDA SETUP: Highest compute capability among GPUs detected: 7.5
CUDA SETUP: Detected CUDA version 121
CUDA SETUP: Loading binary D:\Python\Python310\lib\site-packages\bitsandbytes\libbitsandbytes_cuda121.dll...
Loading checkpoint shards: 100%|██████████| 4/4 [00:07<00:00, 1.99s/it]
E:\Python\Model\env\lib\site-packages\transformers\models\llama\modeling_llama.py:728: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:234.)
attn_output = torch.nn.functional.scaled_dot_product_attention(
I am an artificial-intelligence model designed specifically to understand and generate text in Chinese and English. I was trained on large Chinese and English datasets so that I can better understand and generate content in both languages. The technology I use is based on large language models and can handle all kinds of text-generation tasks, including but not limited to dialogue, summarization, and translation. My training data contains a large amount of textual information, which lets me learn the rules and patterns of language and understand and generate text better in the future. If you have any questions about Chinese or English, I'm happy to help.

Process finished with exit code 0
No problems on my end.
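
For what it's worth, newer transformers releases deprecate the bare load_in_4bit flag in favor of an explicit BitsAndBytesConfig. A minimal sketch of the equivalent 4-bit load (the compute dtype here is my assumption, chosen to match the fp16 load above):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,  # assumed; mirrors torch_dtype above
)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)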

I ran into a similar problem: the model keeps generating gibberish, occupies the GPU continuously, and eventually hangs ollama.
I'm using the latest wangshenzhi/llama3-8b-chinese-chat-ollama-q4.
With the same input it doesn't always reproduce, but it definitely happens.

Problematic prompt: <|eot_id|><|start_header_id|>user<|end_header_id|>\n介绍一下你自己<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n
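
Note that this prompt begins with a stray <|eot_id|> and is missing <|begin_of_text|>, which differs from what the tokenizer's chat template produces. A minimal sketch to print the template output for comparison, assuming a transformers tokenizer for this model (tokenize=False returns the formatted string rather than token ids):

prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "介绍一下你自己"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
# Expected Llama-3-style shape:
# <|begin_of_text|><|start_header_id|>user<|end_header_id|>\n\n介绍一下你自己<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n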


Update: after upgrading ollama to version 0.1.34, the problem has not recurred.


Hi, I'm running into the same problem. Did you manage to solve it?
