TommyZQ committed on
Commit 361e5ab
1 Parent(s): 11597a6

Update README.md

Files changed (1): README.md +94 -27
README.md CHANGED
@@ -1,38 +1,105 @@
---
license: apache-2.0
---

- ## Model Details
- - **Model name:** TM-1B
- - **Model version:** 1.0
- - **Developed by:** [Development Team or Organization Name]
- - **Model type:** [e.g., Machine Translation, Text Classification, etc.]
- - **Model framework:** [e.g., TensorFlow, PyTorch, etc.]
- - **Training data:** [Description of the dataset(s) used for training]
- - **Validation data:** [Description of the dataset(s) used for validation]

- ## Intended Use
- - **Primary intended users:** [Who should be using this model - e.g., data scientists, developers]
- - **Out-of-scope use cases:** [List any use cases that are not recommended]

- ## Model Performance
- - **Metrics:** [Description of the metrics used to evaluate model performance]
- - **Evaluation results:** [Summary of the model's performance based on the chosen metrics]

- ## Ethical Considerations
- - **Bias detection:** [Any steps taken to address potential bias in the training data]
- - **Fairness assessment:** [Description of fairness assessment methods and results if applicable]

- ## Caveats and Recommendations
- - **Known limitations:** [List known limitations of the model]
- - **Best practices:** [Suggestions on best practices for implementation of the model]

- ## Change Log
- - **[Date]:** Model version 1.0 released.

- ## Contact Information
- - **Maintainer(s):** [Contact details for the person or team responsible for maintaining the model]
- - **Issues:** [Information on where to report issues or bugs]

- ## License
- - **Model license:** [Details of the model's usage license, if applicable]
---
+ language:
+ - en
+ pipeline_tag: text-generation
+ tags:
+ - code
license: apache-2.0
---

+ # **csg-wukong-1B** [[中文]](#chinese) [[English]](#english)

+ <a id="english"></a>

+ <p align="center">
+ <img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
+ </p>

+ <p align="center"><a href="https://portal.opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[wechat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
 

+ OpenCSG stands for Converged resources, Software refinement, and Generative LM. The 'C' represents Converged resources, the integration and full utilization of hybrid computing resources. The 'S' represents Software refinement, software that is refined and reshaped by large models. The 'G' represents Generative LM, widespread, inclusive, and democratized generative large models.
+
+ The vision of OpenCSG is to empower every industry, every company, and every individual to own their own models. We adhere to the principles of openness and open source, making OpenCSG's large-model software stack available to the community. We welcome everyone to use it, send feedback, and contribute.
+
+ ## Model Description
+
+ csg-wukong-1B is a 1-billion-parameter small language model (SLM) pretrained on 1T tokens.
+
+ <br>
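
For illustration (not part of the original card), a minimal generation sketch is shown below. It assumes the checkpoint is published under the Hugging Face id `opencsg/csg-wukong-1B` (an assumption based on the model name) and loads through the standard `transformers` causal-LM API.

```python
# Minimal usage sketch; the repo id is assumed from the model name, not confirmed by the card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "opencsg/csg-wukong-1B"  # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```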
+
+ # Training
+
+ ## Hardware
+
+ - **GPUs:** 16 H800
+ - **Training time:** 34 days
+
+ ## Software
+
+ - **Orchestration:** [DeepSpeed](https://github.com/OpenCSGs)
+ - **Neural networks:** [PyTorch](https://github.com/pytorch/pytorch)
+ - **BF16 (if applicable):** [apex](https://github.com/NVIDIA/apex)
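
As a rough illustration of how these components fit together (not the actual training code or configuration of csg-wukong-1B), the sketch below wires a PyTorch model into DeepSpeed with bf16 enabled; all config values are assumptions.

```python
# Illustrative DeepSpeed + PyTorch bf16 setup; values are assumptions, not the real settings.
import deepspeed
import torch
from torch import nn

model = nn.Linear(2048, 2048)  # stand-in for the actual 1B-parameter network

ds_config = {
    "train_micro_batch_size_per_gpu": 4,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},                # mixed-precision training in bfloat16
    "zero_optimization": {"stage": 2},        # shard optimizer states and gradients
    "optimizer": {"type": "AdamW", "params": {"lr": 3e-4}},
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

batch = torch.randn(4, 2048, device=engine.device, dtype=torch.bfloat16)
loss = engine(batch).float().pow(2).mean()    # dummy loss for illustration only
engine.backward(loss)
engine.step()
```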
+
+ <a id="chinese"></a>
+
+ <p>
+
+ </p>
+
+ # OpenCSG Introduction
+
+ <p align="center">
+ <img width="300px" alt="OpenCSG" src="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/GwYXPKuEoGCGcMICeW-sb.jpeg">
+ </p>

+ <p align="center"><a href="https://opencsg.com/models">[OpenCSG Community]</a> <a href="https://github.com/opencsgs">[github]</a> <a href="https://cdn-uploads.huggingface.co/production/uploads/64c71b27d43e4dee51a8b31a/HU6vz21qKTEmUBCWqCFh9.jpeg">[WeChat]</a> <a href="https://twitter.com/OpenCsg">[Twitter]</a> </p>
69
+
70
+
71
+
72
+ </div>
73
+ OpenCSG中 Open是开源开放;C 代表 Converged resources,整合和充分利用的混合异构资源优势,算力降本增效;S 代表 Software refined,重新定义软件的交付方式,通过大模型驱动软件开发,人力降本增效;G 代表 Generative LM,大众化、普惠化和民主化的可商用的开源生成式大模型。
74
+
75
+ OpenCSG的愿景是让每个行业、每个公司、每个人都拥有自己的模型。 我们坚持开源开放的原则,将OpenCSG的大模型软件栈开源到社区,欢迎使用、反馈和参与共建,欢迎关注。
+
+ ## Model Description
+
+ csg-wukong-1B is a small language model with 1B parameters, trained on 1T tokens.
+
+ <br>
+
+ This is a model version fine-tuned from [phi-2](https://huggingface.co/microsoft/phi-2).
+
+ | Model size | Base model |
+ | --- | --- |
+ | 2.7B | [opencsg/Opencsg-phi-2-v0.1](https://huggingface.co/opencsg/opencsg-phi-2-v0.1) |
+ | 3B | [opencsg/Opencsg-stable-coder-3b-v1](https://huggingface.co/opencsg/opencsg-stable-code-3b-v1) |
+
+ # Training
+
+ ## Hardware
+
+ - **GPUs:** 16 H800
+ - **Training time:** 34 days
+
+ ## Software
+
+ - **Fine-tuning framework:** [DeepSpeed](https://github.com/OpenCSGs)
+ - **Deep learning framework:** [PyTorch](https://github.com/pytorch/pytorch)
+ - **BF16:** [apex](https://github.com/NVIDIA/apex)