study-hjt committed
Commit
5aba63b
1 Parent(s): 5dce52e

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -62,15 +62,15 @@ KeyError: 'qwen2'
 Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
 
 ```python
-from modelscope import AutoModelForCausalLM, AutoTokenizer
+from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "huangjintao/Qwen1.5-110B-Chat-GPTQ-Int8",
+    "study-hjt/Qwen1.5-110B-Chat-GPTQ-Int8",
     torch_dtype="auto",
     device_map="auto"
 )
-tokenizer = AutoTokenizer.from_pretrained("huangjintao/Qwen1.5-110B-Chat-GPTQ-Int8")
+tokenizer = AutoTokenizer.from_pretrained("study-hjt/Qwen1.5-110B-Chat-GPTQ-Int8")
 
 prompt = "Give me a short introduction to large language model."
 messages = [
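
The hunk above stops at `messages = [`, so the generation part of the snippet is not visible in this diff. For context, here is a minimal sketch of how the updated snippet would typically continue, assuming the rest of the README follows the standard upstream Qwen1.5 `apply_chat_template` example (the system prompt text and `max_new_tokens=512` below are assumptions, not part of this commit):

```python
# Sketch of the full snippet after this commit, using the renamed repo id
# and the `transformers` import introduced by the diff above.
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "study-hjt/Qwen1.5-110B-Chat-GPTQ-Int8",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("study-hjt/Qwen1.5-110B-Chat-GPTQ-Int8")

prompt = "Give me a short introduction to large language model."
messages = [
    # System prompt assumed from the usual Qwen1.5 README example
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": prompt},
]

# Render the chat messages into a single prompt string via the model's chat template
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(device)

# Generate a response and strip the prompt tokens from the output
generated_ids = model.generate(model_inputs.input_ids, max_new_tokens=512)
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```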