Commit c24133c by zRzRzRzRzRzRzR
Parent(s): 76f3474
Files changed (2):
  1. README.md +3 -0
  2. README_en.md +7 -2
README.md CHANGED
@@ -16,6 +16,9 @@ inference: false
 
 Read this in [English](README_en.md).
 
+**On 2024/07/24, we released our latest technical deep dive on long context. See [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) for the technical report on the long-context techniques used in training the open-source GLM-4-9B model.**
+
+## Model Introduction
 GLM-4-9B is the open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu AI.
 In evaluations on datasets covering semantics, mathematics, reasoning, code, and knowledge, GLM-4-9B and its human preference-aligned version GLM-4-9B-Chat both show strong performance.
 In addition to multi-round conversations, GLM-4-9B-Chat also supports web browsing, code execution, custom tool calls (Function Call), and long-context reasoning (up to 128K
README_en.md CHANGED
@@ -1,12 +1,16 @@
 # GLM-4-9B-Chat
 
+**On July 24, 2024, we released the latest technical interpretation related to long texts. Check
+out [here](https://medium.com/@ChatGLM/glm-long-scaling-pre-trained-model-contexts-to-millions-caa3c48dea85) to view our
+technical report on long context technology in the training of the open-source GLM-4-9B model.**
+
 ## Model Introduction
 
 GLM-4-9B is the open-source version of the latest generation of pre-trained models in the GLM-4 series launched by Zhipu
 AI. In the evaluation of data sets in semantics, mathematics, reasoning, code, and knowledge, **GLM-4-9B**
 and its human preference-aligned version **GLM-4-9B-Chat** have shown superior performance beyond Llama-3-8B. In
 addition to multi-round conversations, GLM-4-9B-Chat also has advanced features such as web browsing, code execution,
-custom tool calls (Function Call), and long text
+custom tool calls (Function Call), and long context
 reasoning (supporting up to 128K context). This generation of models has added multi-language support, supporting 26
 languages including Japanese, Korean, and German. We have also launched the **GLM-4-9B-Chat-1M** model that supports 1M
 context length (about 2 million Chinese characters) and the multimodal model GLM-4V-9B based on GLM-4-9B.
@@ -68,7 +72,8 @@ on [Berkeley Function Calling Leaderboard](https://github.com/ShishirPatil/goril
 
 **For more inference code and requirements, please visit our [github page](https://github.com/THUDM/GLM-4).**
 
-**Please strictly follow the [dependencies](https://github.com/THUDM/GLM-4/blob/main/basic_demo/requirements.txt) to install, otherwise it will not run properly**
+**Please strictly follow the [dependencies](https://github.com/THUDM/GLM-4/blob/main/basic_demo/requirements.txt) to
+install, otherwise it will not run properly**
 
 ### Use the following method to quickly call the GLM-4-9B-Chat language model
 
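The diff references the README's quick-start section ("Use the following method to quickly call the GLM-4-9B-Chat language model") without showing its code. A minimal sketch of that call pattern with Hugging Face `transformers` is below; the device choice, dtype, and generation settings are assumptions rather than the README's exact snippet, so adjust them for your hardware and consult the GLM-4 repo for the authoritative version.

```python
# Sketch: quick call of GLM-4-9B-Chat via Hugging Face transformers.
# Device, dtype, and generation settings are assumptions; see the GLM-4
# repo's basic_demo for the reference implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "THUDM/glm-4-9b-chat"


def build_messages(query: str) -> list:
    """Wrap a user query in the chat-message format expected by apply_chat_template."""
    return [{"role": "user", "content": query}]


if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, trust_remote_code=True
    ).to(device).eval()

    # Render the chat template and tokenize in one step.
    inputs = tokenizer.apply_chat_template(
        build_messages("Hello, what can you do?"),
        add_generation_prompt=True,
        tokenize=True,
        return_tensors="pt",
        return_dict=True,
    ).to(device)

    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=128)
        # Strip the prompt tokens, keeping only the generated reply.
        outputs = outputs[:, inputs["input_ids"].shape[1]:]
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note the `trust_remote_code=True` flag: the model ships custom modeling code on the Hub, so loading it executes repository code, and the pinned dependencies mentioned in the diff matter for this to run properly.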