---
language:
- en
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model: tiiuae/falcon-7b
tags:
- pretrained
- conversational
widget:
- text: |-
    - Hello Alice, what are you cooking for us today?
    - Hello Bob,
  example_title: Request for a recipe
  group: Dash
- text: |-
    [Intervenant 1:] Hello Alice, what are you cooking for us today?
    [Intervenant 2:] Hello Bob,
  example_title: Request for a recipe
  group: Intervenant
- text: |-
    [Camille:] Hello Alice, what are you cooking for us today?
    [Dominique:] Hello Bob,
  example_title: Request for a recipe
  group: FirstName
- text: |-
    [Bob Brown:] Hello Alice, what are you cooking for us today?
    [Alice Green:] Hello Bob,
  example_title: Request for a recipe
  group: Named
inference:
  parameters:
    temperature: 1
    max_new_tokens: 200
    top_k: 10
datasets:
- OpenLLM-France/Claire-Dialogue-English-0.1
---

# Claire-7B-EN-0.1

**Claire-7B-EN-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) with the support of [OpenLLM-France](https://github.com/OpenLLM-France),**
**adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on English conversational data.**

<!-- Quantized versions in GGUF format can be found in [TheBloke/Claire-7B-0.1-GGUF](https://huggingface.co/TheBloke/Claire-7B-0.1-GGUF). -->

Claire-7B-EN-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that, due to its training, the model is prone to generating dialogues with disfluencies and other constructions common to spoken language.


* [Typical usage](#typical-usage)
  * [Typical prompts](#typical-prompts)
* [Training Details](#training-details)
  * [Training Data](#training-data)
  * [Training Procedure](#training-procedure)
<!-- * [Evaluation](#evaluation) -->
* [Variants](#variants)
* [License](#license)
* [Citation](#citation)
* [Acknowledgements](#acknowledgements)
* [Contact](#contact)


## Typical usage

```python
import transformers
import torch

model_name = "OpenLLM-France/Claire-7B-EN-0.1"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    load_in_4bit=True                          # For efficient inference, if supported by the GPU card
)

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
generation_kwargs = dict(
    num_return_sequences=1,                    # Number of variants to generate.
    return_full_text=False,                    # Do not include the prompt in the generated text.
    max_new_tokens=200,                        # Maximum length for the output text.
    do_sample=True, top_k=10, temperature=1.0, # Sampling parameters.
    pad_token_id=tokenizer.eos_token_id,       # Just to avoid a harmless warning.
)

prompt = """\
- Hello Alice, what are you cooking for us today?
- Hello Bob,\
"""
completions = pipeline(prompt, **generation_kwargs)
for completion in completions:
    print(prompt + " […]" + completion['generated_text'])
```
This will print something like:
```
- Hello Alice, what are you cooking for us today?
- Hello Bob, […] I'm going to make beef and vegetables.
- That sounds great. What type of vegetables are you going to make?
- I'm thinking of making a broccoli salad and steamed potatoes.
- I love broccoli and potatoes, especially together. Do you plan to make a dressing or a mayo for the broccoli?
- Yes, I have to make a dressing. How about some mayo for the potatoes?
- I don't know if I like the sound of that, but go for it. You're the chef! I'll try some.
- I'm sure you will.
- I'll try some.
```

You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization).

If you have trouble running this code, make sure you have recent versions of `torch`, `transformers` and `accelerate` (see [requirements.txt](requirements.txt)).
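On recent versions of `transformers`, 4-bit loading is configured through a `BitsAndBytesConfig` object rather than the bare `load_in_4bit=True` flag used above. A sketch of the equivalent call, assuming `transformers` with `bitsandbytes` installed and a compatible GPU:

```python
import torch
import transformers

# 4-bit quantization settings (newer alternative to passing load_in_4bit=True directly)
quantization_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,  # store weights in 4 bits, compute in bfloat16
)

model = transformers.AutoModelForCausalLM.from_pretrained(
    "OpenLLM-France/Claire-7B-EN-0.1",
    device_map="auto",
    quantization_config=quantization_config,
)
```

Either form should produce the same quantized model; the config object simply makes the quantization options explicit.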

### Typical prompts

Claire-7B-EN-0.1 was trained on English conversations. During training, the dialogues were normalized in several formats. The possible formats for expected prompts are as follows:

A monologue can be specified as a single line prompt (though keep in mind that Claire might still return a dialogue because of its training):
```python
prompt = "Ladies and gentlemen, welcome aboard the S.S. Anne! We will be leaving in"
```

A dialogue between two speakers can be specified with one line per speech turn starting with a dash:
```python
prompt = """\
- Hello Alice, what are you cooking for us today?
- Hello Bob,\
"""
```

A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Speaker X:]` where `X` is a number:
```python
prompt = """\
[Speaker 1:] Hello Alice, what are you cooking for us today?
[Speaker 2:] Hello Bob,\
"""
```

A dialogue or multilogue with named speakers can be specified with lines that start with `[SpeakerName:]`
where `SpeakerName` can be a first name, a first and a last name, a nickname, a title…
```python
prompt = """\
[Bob:] Hello Alice, what are you cooking for us today?
[Alice:] Hello Bob,\
"""
```
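The formats above can also be produced programmatically from a list of speech turns. Below is a minimal sketch; the `format_prompt` helper is hypothetical, not part of the model's tooling:

```python
def format_prompt(turns, style="dash"):
    """Render (speaker, text) pairs in one of the prompt formats the model
    saw during training: "dash", "number" ([Speaker N:]) or "name" ([Name:])."""
    lines = []
    speakers = []  # order of first appearance, used to number speakers
    for speaker, text in turns:
        if speaker not in speakers:
            speakers.append(speaker)
        if style == "dash":
            lines.append(f"- {text}")
        elif style == "number":
            lines.append(f"[Speaker {speakers.index(speaker) + 1}:] {text}")
        else:  # "name"
            lines.append(f"[{speaker}:] {text}")
    return "\n".join(lines)

turns = [("Bob", "Hello Alice, what are you cooking for us today?"),
         ("Alice", "Hello Bob,")]
print(format_prompt(turns, style="number"))
# [Speaker 1:] Hello Alice, what are you cooking for us today?
# [Speaker 2:] Hello Bob,
```

Whichever format you choose, keeping it consistent across the whole prompt should give the best continuations, since each training dialogue used a single format throughout.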

## Training Details

### Training Data

The training dataset is available at [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1)
<!-- and described in ["The Claire French Dialogue Dataset" (2023)](https://arxiv.org/abs/2311.16840). -->

Claire-7B-EN-0.1 was tuned from Falcon-7b on the following data distribution:

| **Data type**                 | **Words**  | **Training Sampling Weight** | **Sources**                                         |
|-------------------------------|------------|------------------------------|-----------------------------------------------------|
| Broadcast                     | 720M       | 43%                          | MediaSum                                            |
| Parliamentary proceedings     |  56M       | 27%                          | Europarl                                            |
| Assistance                    |  53M       | 13%                          | ReDial, OpenDialKG, ABCD, AirDialog, MULTIWOZ2_2, MulDoGO |
| Misc                          |  10M       | 10%                          | British National Corpus (BNC)                       |
| Spoken dialogue               |   4.7M     | 4.6%                         | Charlotte, Switchboard                              |
| Meetings                      |   1.5M     | <2%                          | AMI, ICSI                                           |
| Free Chat                     |   3.6M     | <1%                          | Chit-Chat, Daily Dialog                             |


Training data was augmented with the following techniques:
* varying the format used to indicate speech turns (dashes or [XXX:])
* substituting [Speaker X:] for [SpeakerName:] or vice versa, where [SpeakerName:] might be a real name or a randomly generated name
* removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems)
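As an illustration, the last two augmentations might look like the following sketch (hypothetical helper functions, not the actual training code):

```python
import re
import string

def asr_normalize(line):
    """Mimic ASR transcripts: strip punctuation and lowercase the text."""
    return line.translate(str.maketrans("", "", string.punctuation)).lower()

def swap_turn_format(line, name="Camille"):
    """Replace a generic "[Speaker N:]" marker with a named "[Name:]" marker."""
    return re.sub(r"\[Speaker \d+:\]", f"[{name}:]", line)

print(asr_normalize("Hello Bob, what's up?"))   # hello bob whats up
print(swap_turn_format("[Speaker 1:] Hello."))  # [Camille:] Hello.
```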

Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns.
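Splitting at speaker-turn boundaries can be sketched as a greedy packing of whole turns into length-limited chunks. The snippet below is hypothetical and uses word count as a stand-in for the tokenizer's token count:

```python
def split_conversation(turns, max_len=2048):
    """Greedily pack whole speech turns into chunks of at most max_len units,
    starting a new chunk at a turn boundary when the next turn would overflow.
    Length is measured in words here as a proxy for tokens."""
    chunks, current, current_len = [], [], 0
    for turn in turns:
        n = len(turn.split())
        if current and current_len + n > max_len:
            chunks.append(current)
            current, current_len = [], 0
        current.append(turn)
        current_len += n
    if current:
        chunks.append(current)
    return chunks

conv = ["- " + "word " * 5, "- " + "word " * 4, "- " + "word " * 3]
print([len(chunk) for chunk in split_conversation(conv, max_len=10)])  # [1, 2]
```

A single turn longer than the limit would still need hard truncation, which is why the card says conversations were split between turns only "where possible".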

While the model has been trained and evaluated only on English dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data.


### Training Procedure 

The training code is available at [https://github.com/OpenLLM-France/Lit-Claire](https://github.com/OpenLLM-France/Lit-Claire).

Claire-7B-EN-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token).
See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details.

Claire-7B-EN-0.1 was trained on 1 A100 80GB GPU for about 50 GPU hours.

The training hyperparameters were as follows:
| **Hyperparameter** | **Value**  |
|--------------------|------------|
| Precision          | `bfloat16` |
| Optimizer          | AdamW      |
| Learning rate      | 1e-4       |
| Weight decay       | 1e-2       |
| Batch size         | 132        |
| LoRA rank          | 16         |
| LoRA alpha         | 32         |
| Dropout            | 0.05       |
| Gradient clipping  | 1          |

<!--
## Evaluation

To evaluate Claire-7B-EN-0.1’s ability to generate natural sounding, French conversations, we compared its responses to a variety of prompts with those of three other models:
* [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b),
* [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) 
* [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1) (a version of Mistral-7B-v0.1 adapted in the same fashion as Claire-7B-0.1)

We tested an even mixture of monologue and dialogue-style prompts.
Each of the four generated responses was evaluated along three dimensions:
Interaction, Fluency and Relevance.
Evaluators were also asked to rank the four responses by preference.

Our results confirm that continual pre-training of Falcon-7b and Mistral-7B-v0.1 leads to improvement (relative to the base models) along all three evaluation dimensions and that Claire-7B-0.1 outperforms the adapted Mistral counterpart in the Fluency and Relevance categories
(and in the Interaction category if we focus on dialogue-style prompts).

Ranking results also reveal a clear subjective preference for Claire-7B-0.1,
as shown in the following table:
| | <span style="font-weight: normal">... over</span><br /> **Claire-Falcon** | <span style="font-weight: normal">... over</span><br /> **Claire-Mistral** | <span style="font-weight: normal">... over</span><br /> **Falcon** | <span style="font-weight: normal">... over</span><br /> **Mistral** |
|--------------------------------------|----------------------|-----------------------|---------------|---------------------|
| prefer<br /> **Claire-Falcon** ...  |                      | **62.2%**             | **63.9%**     | **83.8%**           |
| prefer<br /> **Claire-Mistral** ... | _34.8%_              |                       | **56.2%**     | **75.3%**           |
| prefer<br /> **Falcon** ...         | _36.1%_              | _43.8%_               |               | **81.4%**           |
| prefer<br /> **Mistral** ...        | _16.2%_              | _24.7%_               | _18.6%_       |                     |

(In this table,
"Claire-Falcon" stands for Claire-7B-0.1,
"Falcon", for [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b),
"Mistral", for [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1)
and "Claire-Mistral", for [Claire-Mistral-7B-0.1](https://huggingface.co/OpenLLM-France/Claire-Mistral-7B-0.1).)

Please note that the model can generate disfluencies and humorous responses as a result of its training on spoken and theatrical text. 
-->

## Variants

Claire-7B-EN-0.1 is fine-tuned only on English dialogue data, but the following variants are available to evaluate the impact of language mixture on dialogue understanding:
* [Claire-7B-FR-EN-25-75](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-25-75-0.1), with 25/75 French-English data split.
* [Claire-7B-FR-EN-50-50](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-50-50-0.1), with 50/50 French-English data split.
* [Claire-7B-FR-EN-75-25](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-75-25-0.1), with 75/25 French-English data split.
* [Claire-7B](https://huggingface.co/OpenLLM-France/Claire-7B-0.1), with only French data.


## License

Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses,
Claire-7B-EN-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).

<!-- You can find a variant of this model published under the Apache 2.0 license at [OpenLLM-France/Claire-7B-Apache-0.1](https://huggingface.co/OpenLLM-France/Claire-7B-Apache-0.1). -->

## Citation

When using the Claire family of models, please cite the following paper:

Jérôme Louradour, Julie Hunter, Ismaïl Harrando, Guokan Shang, Virgile Rennard & Jean-Pierre Lorré (2024). [Claire: Large Language Models for Spontaneous French Dialogue](https://aclanthology.org/2024.jeptalnrecital-taln.36.pdf). In _Actes de la 31ème Conférence sur le Traitement Automatique des Langues Naturelles, volume 1: articles longs et prises de position_ (pp. 530-548).

```bibtex
@inproceedings{louradour2024claire,
  title={Claire: Large Language Models for Spontaneous French Dialogue},
  author={Louradour, J{\'e}r{\^o}me and Hunter, Julie and Harrando, Isma{\"\i}l and Shang, Guokan and Rennard, Virgile and Lorr{\'e}, Jean-Pierre},
  booktitle={Actes de la 31{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles, volume 1: articles longs et prises de position},
  pages={530--548},
  year={2024}
}
```

## Acknowledgements

This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561). 

Claire-7B-EN-0.1 was created by members of [LINAGORA](https://labs.linagora.com/).

Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice.

## Contact

contact@openllm-france.fr