Tags: Text Generation · Transformers · Safetensors · Czech · mpt · custom_code · text-generation-inference · Inference Endpoints
mfajcik committed
Commit ec85dac
1 Parent(s): ce06b0d

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -6,7 +6,7 @@ language:
 - cs
 ---
 # Introduction
-CSMPT7b is a large Czech language model continously pretrained for 272b training tokens from English [MPT7b](https://huggingface.co/mosaicml/mpt-7b) model. Model was pretrained on ~67b token [Large Czech Collection](https://huggingface.co/datasets/BUT-FIT/but_lcc) using Czech tokenizer, obtained using our vocabulary swap method (see below).
+CSMPT7b is a large Czech language model continously pretrained on 272b training tokens from English [MPT7b](https://huggingface.co/mosaicml/mpt-7b) model. Model was pretrained on ~67b token [Large Czech Collection](https://huggingface.co/datasets/BUT-FIT/but_lcc) using Czech tokenizer, obtained using our vocabulary swap method (see below).
 Training was done on [Karolina](https://www.it4i.cz/en) cluster.
 
 # Evaluation
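
For readers landing on this commit, a minimal usage sketch of the model described in the README above. It assumes the repository id `BUT-FIT/csmpt7b` and that the model, like the base MPT7b, ships custom modeling code requiring `trust_remote_code=True`; both are assumptions inferred from the tags at the top of the page, not part of this commit.

```python
# Hypothetical usage sketch; not part of this commit or the README diff above.
# Assumes the model is published as "BUT-FIT/csmpt7b" and that, like the base
# MPT7b, it relies on custom modeling code (hence trust_remote_code=True).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BUT-FIT/csmpt7b"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Generate a short Czech continuation from a simple prompt.
inputs = tokenizer("Nejznámějším českým spisovatelem je", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```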