OpenNLPLab committed
Commit 0555270
1 Parent(s): f9edef6

Update README.md

Files changed (1):
  1. README.md (+2 -3)
README.md CHANGED

@@ -44,7 +44,7 @@ tags:
 
 # Introduction
 
-We are re-inventing the Large Language Model (LLM). This is the official implementation of TransNormerLLM in [link](https://github.com/OpenNLPLab/Transnormer). Our opened weights of TransNormerLLM are now accessible to individuals, creators, researchers and businesses of all sizes so that they can experiment, innovate and scale their ideas responsibly.
+We are re-inventing the Large Language Model (LLM). This is the official implementation of TransNormerLLM in [link](https://arxiv.org/pdf/2307.14995.pdf). Our opened weights of TransNormerLLM are now accessible to individuals, creators, researchers and businesses of all sizes so that they can experiment, innovate and scale their ideas responsibly.
 
 Our release contains the TransNormerLLM model implementation, the open-source weights and the starting code for Supervised Fine-tuning (SFT). We will show examples on how to load [TransNormerLLM](https://github.com/OpenNLPLab/Transnormer) models, run SFT and inference on it.
 
@@ -53,9 +53,8 @@ Our release contains the TransNormerLLM model implementation, the open-source we
 - TransNormerLLM achieved competitive performance of its size on multiple well-approved Chinese, English, and multi-language general and domain-specific benchmarks.
 - This release includes **Base** versions with **385M**, **1B**, and **7B** parameters.
 - All versions are fully open to academic research. Developers only need to apply via email and obtain official commercial permission to use it for free commercially.
-- For more information, welcome reading our academic paper [TransNormerLLM](https://github.com/OpenNLPLab/Transnormer).
+- For more information, welcome reading our academic paper [TransNormerLLM](https://arxiv.org/pdf/2307.14995.pdf).
 
-![](./images/TransNormerLLM-arch.png)
 
 
 # Released Weights
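
The README being updated promises examples of loading TransNormerLLM and running inference on it. A minimal sketch of that workflow with the Hugging Face `transformers` Auto classes might look like the following; the Hub repository ids are an assumption derived from the release sizes named in the README (385M, 1B, 7B), not something the diff confirms:

```python
# Hypothetical sketch: load a TransNormerLLM checkpoint from the Hugging Face
# Hub and decode a short continuation. Repo ids below are assumed from the
# release sizes in the README, not taken from this commit.

def repo_id(size: str) -> str:
    """Map a release size string ("385M", "1B", "7B") to an assumed Hub repo id."""
    return f"OpenNLPLab/TransNormerLLM-{size}"

def generate(prompt: str, size: str = "7B", max_new_tokens: int = 64) -> str:
    """Load a checkpoint and decode a continuation of `prompt`."""
    # Imported lazily so repo_id() is usable without the heavy dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # The model ships custom modeling code, so trust_remote_code is required.
    tokenizer = AutoTokenizer.from_pretrained(repo_id(size), trust_remote_code=True)
    model = AutoModelForCausalLM.from_pretrained(repo_id(size), trust_remote_code=True)

    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("TransNormerLLM is"))
```

Note that calling `generate` downloads the full model weights, so the 7B variant needs a machine with enough RAM/VRAM to hold it.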