x54-729 committed
Commit 486ab72
1 Parent(s): 74d6d2f

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -73,7 +73,7 @@ The table below compares the performance of mainstream open-source models on som
 Overall, InternLM-20B comprehensively outperforms open-source models in the 13B parameter range in terms of overall capabilities, and on inference evaluation sets, it approaches or even surpasses the performance of Llama-65B.
 
 ## Import from Transformers
-To load the InternLM 7B Chat model using Transformers, use the following code:
+To load the InternLM 20B model using Transformers, use the following code:
 ```python
 >>> from transformers import AutoTokenizer, AutoModelForCausalLM
 >>> tokenizer = AutoTokenizer.from_pretrained("internlm/internlm-20b", trust_remote_code=True)
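The diff's context window cuts off right after the tokenizer line. A minimal sketch of how the loading code typically continues with the standard Transformers API (this continuation is an illustration, not part of the commit; the prompt and generation parameters are assumptions):

```python
# MODEL_ID matches the repo name shown in the diff above.
MODEL_ID = "internlm/internlm-20b"


def main():
    # Imports kept inside main() because actually running this requires the
    # transformers package and downloading the full 20B checkpoint.
    from transformers import AutoTokenizer, AutoModelForCausalLM

    # Same loading pattern as the README snippet: trust_remote_code=True is
    # needed because the repo ships custom modeling code.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, trust_remote_code=True)
    model.eval()

    # Illustrative generation call; prompt and max_new_tokens are arbitrary.
    inputs = tokenizer("The capital of France is", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))


if __name__ == "__main__":
    main()  # heavy: downloads and loads the 20B model
```

In practice a model this size is usually loaded with reduced precision (e.g. `torch_dtype=torch.float16`) or across devices with `device_map="auto"`; those options are omitted above for brevity.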