PY007 committed
Commit 3a2f04d
1 Parent(s): 0fc23c5

Update README.md

Files changed (1)
  1. README.md +1 -2
README.md CHANGED
@@ -1,13 +1,12 @@
 <div align="center">
 
 # TinyLlama-1.1B
-English | [中文](README_zh-CN.md)
 </div>
 
 The TinyLlama project aims to **pretrain** a **1.1B Llama model on 3 trillion tokens**. With some proper optimization, we can achieve this within a span of "just" 90 days using 16 A100-40G GPUs 🚀🚀. The training has started on 2023-09-01.
 
 <div align="center">
-<img src=".github/TinyLlama_logo.png" width="300"/>
+<img src="TinyLlama_logo.png" width="300"/>
 </div>
 
 We adopted exactly the same architecture and tokenizer as Llama 2. This means TinyLlama can be plugged and played in many open-source projects built upon Llama. Besides, TinyLlama is compact with only 1.1B parameters. This compactness allows it to cater to a multitude of applications demanding a restricted computation and memory footprint.
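
The README text retained in this diff notes that TinyLlama reuses the Llama 2 architecture and tokenizer and can therefore be dropped into Llama-based tooling. A minimal sketch of what that looks like with the Hugging Face `transformers` library follows; the checkpoint id is a placeholder assumption, not something recorded in this commit.

```python
# Minimal sketch: loading a TinyLlama checkpoint the same way as any Llama-family
# model, via Hugging Face transformers. The repo id below is a placeholder
# assumption; substitute the checkpoint you actually want to use.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PY007/TinyLlama-1.1B-step-50K-105b"  # placeholder checkpoint id

tokenizer = AutoTokenizer.from_pretrained(model_id)      # same tokenizer as Llama 2
model = AutoModelForCausalLM.from_pretrained(model_id)   # Llama architecture, ~1.1B params

inputs = tokenizer("The TinyLlama project aims to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```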