x54-729 committed
Commit 75131b4
1 Parent(s): cfdbca8

fix README

Files changed (1)
  1. README.md +6 -6
README.md CHANGED
@@ -64,14 +64,14 @@ We have evaluated InternLM2 on several important benchmarks using the open-sourc
 
 ### Import from Transformers
 
-To load the InternLM2 1.8B Chat SFT model using Transformers, use the following code:
+To load the InternLM2 1.8B Chat model using Transformers, use the following code:
 
 ```python
 import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b-sft", trust_remote_code=True)
+tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b", trust_remote_code=True)
 # Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and cause OOM Error.
-model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b-sft", torch_dtype=torch.float16, trust_remote_code=True).cuda()
+model = AutoModelForCausalLM.from_pretrained("internlm/internlm2-chat-1_8b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
 model = model.eval()
 response, history = model.chat(tokenizer, "hello", history=[])
 print(response)
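For reference, the updated snippet in this hunk assembles into the runnable form below. This is a minimal sketch assuming a CUDA-capable GPU and access to the `internlm/internlm2-chat-1_8b` weights on the Hugging Face Hub; the follow-up turn reusing `history` is added here for illustration.

```python
# Minimal sketch of the updated loading snippet (assumes a CUDA GPU and
# network access to the Hugging Face Hub).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("internlm/internlm2-chat-1_8b", trust_remote_code=True)
# Load in float16; the default float32 load roughly doubles memory use and can OOM.
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm2-chat-1_8b",
    torch_dtype=torch.float16,
    trust_remote_code=True,
).cuda()
model = model.eval()

# `chat` is provided by the repository's remote code and threads the
# running conversation through `history`.
response, history = model.chat(tokenizer, "hello", history=[])
print(response)

# Illustrative follow-up turn reusing the accumulated history.
response, history = model.chat(tokenizer, "what can you do?", history=history)
print(response)
```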
@@ -86,7 +86,7 @@ The responses can be streamed using `stream_chat`:
 import torch
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model_path = "internlm/internlm2-chat-1_8b-sft"
+model_path = "internlm/internlm2-chat-1_8b"
 model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
 tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
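The `stream_chat` snippet is truncated in this view. As a sketch of one way the generator is typically consumed (the prompt below is illustrative), printing each newly yielded fragment streams the reply as it is generated:

```python
# Sketch of streaming generation with `stream_chat` (provided by the
# repository's remote code); it yields (partial_response, history)
# tuples as new tokens are generated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "internlm/internlm2-chat-1_8b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()

length = 0
for response, history in model.stream_chat(tokenizer, "Hello", history=[]):
    # Print only the characters added since the previous yield.
    print(response[length:], flush=True, end="")
    length = len(response)
```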
 
@@ -106,7 +106,7 @@ The code is licensed under Apache-2.0, while model weights are fully open for ac
 
 - InternLM2-1.8B: A foundation model with high quality and high adaptation flexibility, serving as a good starting point for downstream deep adaptation.
 - InternLM2-Chat-1.8B-SFT: A chat model obtained by supervised fine-tuning (SFT) on top of InternLM2-1.8B.
-- InternLM2-chat-1.8B: Further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B shows better instruction following, chat experience, and function calling, and is recommended for downstream applications.
+- InternLM2-Chat-1.8B: Further aligned on top of InternLM2-Chat-1.8B-SFT through online RLHF. InternLM2-Chat-1.8B shows better instruction following, chat experience, and function calling, and is recommended for downstream applications.
 
 The InternLM2 models have the following technical characteristics:
@@ -137,7 +137,7 @@ The InternLM2 models have the following technical characteristics:
 
 ### Load via Transformers
 
-To load the InternLM2 1.8B Chat SFT model, use the following code:
+To load the InternLM2 1.8B Chat model, use the following code:
 
 ```python
 import torch
 