IsmailH committed
Commit bfb35c4
1 Parent(s): 45666c8

Update README.md

Files changed (1): README.md (+7 -1)
README.md CHANGED
@@ -37,12 +37,18 @@ inference:
 
 # Claire-7B-EN-0.1
 
-**Claire-7B-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)**
+**Claire-7B-EN-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) and [OpenLLM-France](https://github.com/OpenLLM-France)**
 **adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on English conversational data.**
 
 <!-- Quantized versions in GGUF format can be found in [TheBloke/Claire-7B-0.1-GGUF](https://huggingface.co/TheBloke/Claire-7B-0.1-GGUF). -->
 
 Claire-7B-EN-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generate dialogues with disfluencies and other constructions common to spoken language.
+Claire-7B-EN-0.1 is finetuned only on English dialogue data, but the following variants are available to evaluate the impact of language mixture on dialogue understanding.
+* [Claire-7B-FR-EN-25-75](OpenLLM-France/Claire-7B-FR-EN-25-75-0.1), with 25/75 French-English data split.
+* [Claire-7B-FR-EN-50-50](OpenLLM-France/Claire-7B-FR-EN-50-50-0.1), with 50/50 French-English data split.
+* [Claire-7B-FR-EN-75-25](OpenLLM-France/Claire-7B-FR-EN-75-25-0.1), with 75/25 French-English data split.
+* [Claire-FR-7B](OpenLLM-France/Claire-7B-0.1), with only French data.
+
 
 * [Typical usage](#typical-usage)
 * [Typical prompts](#typical-prompts)
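
The README text above says the model's expected use, without further training, is to generate continuations of dialogues. As a minimal sketch of what feeding it a dialogue might involve, the helper below renders speaker turns as tagged lines; the `[Speaker N:]` tag format and the `format_dialogue` helper are assumptions for illustration, not taken from this commit or the model card.

```python
# Hypothetical sketch: build a dialogue-continuation prompt for a
# Claire-style model. The speaker-tag convention is an assumption.
def format_dialogue(turns):
    """Render (speaker, text) pairs as newline-separated tagged turns."""
    return "\n".join(f"[{speaker}:] {text}" for speaker, text in turns)

prompt = format_dialogue([
    ("Speaker 1", "Hi, how was the meeting?"),
    ("Speaker 2", "Pretty good, we finalized the roadmap."),
])
print(prompt)
```

A string like this would then be passed to the model (or a quantized variant) as the prompt, letting it continue the conversation in the next turn.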