study-hjt committed
Commit cdfcd16 · verified · 1 Parent(s): f04d0bb

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -56,15 +56,15 @@ KeyError: 'qwen2'.
 Here provides a code snippet with `apply_chat_template` to show you how to load the tokenizer and model and how to generate contents.
 
 ```python
-from modelscope import AutoModelForCausalLM, AutoTokenizer
+from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "huangjintao/CodeQwen1.5-7B-Chat-GPTQ-Int4",
+    "study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4",
     torch_dtype="auto",
     device_map="auto"
 )
-tokenizer = AutoTokenizer.from_pretrained("huangjintao/CodeQwen1.5-7B-Chat-GPTQ-Int4")
+tokenizer = AutoTokenizer.from_pretrained("study-hjt/CodeQwen1.5-7B-Chat-GPTQ-Int4")
 
 prompt = "Write a quicksort algorithm in python."
 messages = [