christian-dynamofl committed on
Commit 2ffdf6d
1 Parent(s): 80d64ff
Files changed (1): README.md (+2 -0)
README.md CHANGED
@@ -8,6 +8,8 @@ language:
  - it
  ---
 
+ # Dynamo 8B Model Card
+
  Dynamo 8B is an improvement on the Mistral-7B architecture for multilingual language modeling. Dynamo 8B outperforms Mistral 7B, Llama2 13B, Bloom 7B, and PolyLM 13B on most of the multilingual benchmarks we tested (e.g., PAWS and XCOPA).
 
  It includes an extended tokenizer that was pretrained to better leverage tokens across different languages. The tokenizer was extended by training a SentencePiece BPE tokenizer on selected languages (200M tokens per language) and then merging in the merges and vocabulary entries that were not already present in the Mistral tokenizer. After the tokenizers were merged, the model was pretrained on an additional 210B tokens of multilingual data, including German, Spanish, Korean, Italian, and Turkish text. The pretraining dataset also incorporated English tokens to mitigate catastrophic forgetting.
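
The tokenizer-extension recipe described above can be sketched in code. The following is a minimal, hypothetical illustration, not the actual training pipeline: the corpus paths, language list, and 32k per-language vocabulary size are assumptions, and it uses the base `mistralai/Mistral-7B-v0.1` tokenizer via `transformers`. Note that `add_tokens` appends whole-token vocabulary entries rather than splicing in BPE merge rules, which is a simplification of the merge step the README describes.

```python
# Hypothetical sketch of the tokenizer-extension recipe described above.
# File paths, language codes, and vocab sizes are illustrative assumptions.
import sentencepiece as spm
from transformers import AutoTokenizer

LANGS = ["de", "es", "ko", "it", "tr"]  # assumed target languages

# 1) Train a SentencePiece BPE model per target language
#    (the README uses ~200M tokens per language).
for lang in LANGS:
    spm.SentencePieceTrainer.train(
        input=f"data/{lang}.txt",   # assumed per-language corpus file
        model_prefix=f"spm_{lang}",
        model_type="bpe",
        vocab_size=32000,           # assumed per-language vocab size
    )

# 2) Collect pieces that the base Mistral tokenizer does not already have.
base = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
known = set(base.get_vocab())
new_pieces = []
for lang in LANGS:
    sp = spm.SentencePieceProcessor(model_file=f"spm_{lang}.model")
    for i in range(sp.get_piece_size()):
        piece = sp.id_to_piece(i)
        if piece not in known:
            new_pieces.append(piece)
            known.add(piece)

# 3) Extend the base vocabulary. The model's embedding matrix must be
#    resized to match before the continued multilingual pretraining stage:
base.add_tokens(new_pieces)
# model.resize_token_embeddings(len(base))  # after loading the model
```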