---
language:
- en
license: cc-by-nc-sa-4.0
pipeline_tag: text-generation
base_model: tiiuae/falcon-7b
tags:
- pretrained
- conversational
widget:
- text: |-
    - Hello Alice, what are you cooking for us today?
    - Hello Bob,
  example_title: Request for a recipe
  group: Dash
- text: |-
    [Speaker 1:] Hello Alice, what are you cooking for us today?
    [Speaker 2:] Hello Bob,
  example_title: Request for a recipe
  group: Speaker
- text: |-
    [Camille:] Hello Alice, what are you cooking for us today?
    [Dominique:] Hello Bob,
  example_title: Request for a recipe
  group: FirstName
- text: |-
    [Bob Brown:] Hello Alice, what are you cooking for us today?
    [Alice Green:] Hello Bob,
  example_title: Request for a recipe
  group: Named
inference:
  parameters:
    temperature: 1
    max_new_tokens: 200
    top_k: 10
datasets:
- OpenLLM-France/Claire-Dialogue-English-0.1
---

# Claire-7B-EN-0.1

**Claire-7B-EN-0.1 is a 7B parameter causal decoder-only model built by [LINAGORA](https://labs.linagora.com/) with the support of [OpenLLM-France](https://github.com/OpenLLM-France),**
**adapted from [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) on English conversational data.**

Claire-7B-EN-0.1 is a pretrained language model designed to be attuned to the dynamics of linguistic interactions in dialogue. Without further training, its expected use is to generate continuations of dialogues. Its main purpose is to serve as a base model for fine-tuning on dialogue generation (e.g., chat) and dialogue understanding (e.g., meeting summarization) tasks. Please note that due to its training, the model is prone to generating dialogues with disfluencies and other constructions common to spoken language.

* [Typical usage](#typical-usage)
  * [Typical prompts](#typical-prompts)
* [Training Details](#training-details)
  * [Training Data](#training-data)
  * [Training Procedure](#training-procedure)
* [Variants](#variants)
* [License](#license)
* [Citation](#citation)
* [Acknowledgements](#acknowledgements)
* [Contact](#contact)

## Typical usage

```python
import transformers
import torch

model_name = "OpenLLM-France/Claire-7B-EN-0.1"

tokenizer = transformers.AutoTokenizer.from_pretrained(model_name)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    load_in_4bit=True,  # For efficient inference, if supported by the GPU card
)

pipeline = transformers.pipeline("text-generation", model=model, tokenizer=tokenizer)
generation_kwargs = dict(
    num_return_sequences=1,                     # Number of variants to generate.
    return_full_text=False,                     # Do not include the prompt in the generated text.
    max_new_tokens=200,                         # Maximum length for the output text.
    do_sample=True, top_k=10, temperature=1.0,  # Sampling parameters.
    pad_token_id=tokenizer.eos_token_id,        # Just to avoid a harmless warning.
)

prompt = """\
- Hello Alice, what are you cooking for us today?
- Hello Bob,\
"""
completions = pipeline(prompt, **generation_kwargs)
for completion in completions:
    print(prompt + " […]" + completion['generated_text'])
```

This will print something like:
```
- Hello Alice, what are you cooking for us today?
- Hello Bob, […] I'm going to make beef and vegetables.
- That sounds great. What type of vegetables are you going to make?
- I'm thinking of making a broccoli salad and steamed potatoes.
- I love broccoli and potatoes, especially together. Do you plan to make a dressing or a mayo for the broccoli?
- Yes, I have to make a dressing. How about some mayo for the potatoes?
- I don't know if I like the sound of that, but go for it. You're the chef!
I'll try some.
- I'm sure you will.
- I'll try some.
```

You will need at least 6GB of VRAM to run inference using 4bit quantization (16GB of VRAM without 4bit quantization).

If you have trouble running this code, make sure you have recent versions of `torch`, `transformers` and `accelerate` (see [requirements.txt](requirements.txt)).
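Depending on your `transformers` version, the `load_in_4bit` argument of `from_pretrained` may be deprecated in favor of an explicit quantization config. Below is a minimal sketch of an equivalent call using `BitsAndBytesConfig`; the specific 4-bit settings shown are illustrative assumptions, not values prescribed by this model card:

```python
# Minimal sketch: explicit 4-bit quantization config instead of load_in_4bit=True.
# Assumes a recent transformers with the bitsandbytes package installed;
# the quantization settings below are illustrative, not prescribed by this card.
import torch
import transformers

model_name = "OpenLLM-France/Claire-7B-EN-0.1"
quantization_config = transformers.BitsAndBytesConfig(
    load_in_4bit=True,                      # Store weights in 4 bits.
    bnb_4bit_quant_type="nf4",              # Use the NormalFloat4 data type.
    bnb_4bit_compute_dtype=torch.bfloat16,  # Run computations in bfloat16.
)
model = transformers.AutoModelForCausalLM.from_pretrained(
    model_name,
    device_map="auto",
    quantization_config=quantization_config,
)
```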
### Typical prompts

Claire-7B-EN-0.1 was trained on English conversations. During training, the dialogues were normalized into several formats. The possible formats for expected prompts are as follows:

A monologue can be specified as a single line prompt (though keep in mind that Claire might still return a dialogue because of its training):
```python
prompt = "Ladies and gentlemen, welcome aboard the S.S. Anne! We will be leaving in"
```

A dialogue between two speakers can be specified with one line per speech turn starting with a dash:
```python
prompt = """\
- Hello Alice, what are you cooking for us today?
- Hello Bob,\
"""
```

A dialogue or multilogue (with two or more speakers) can be specified with lines that start with `[Speaker X:]` where `X` is a number:
```python
prompt = """\
[Speaker 1:] Hello Alice, what are you cooking for us today?
[Speaker 2:] Hello Bob,\
"""
```

A dialogue or multilogue with named speakers can be specified with lines that start with `[SpeakerName:]`, where `SpeakerName` can be a first name, a first and last name, a nickname, a title…
```python
prompt = """\
[Bob:] Hello Alice, what are you cooking for us today?
[Alice:] Hello Bob,\
"""
```

## Training Details

### Training Data

The training dataset is available at [OpenLLM-France/Claire-Dialogue-English-0.1](https://huggingface.co/datasets/OpenLLM-France/Claire-Dialogue-English-0.1).

Claire-7B-EN-0.1 was tuned from Falcon-7b on the following data distribution:

| **Data type**             | **Words** | **Training Sampling Weight** | **Sources** |
|---------------------------|-----------|------------------------------|-------------|
| Broadcast                 | 720M      | 43%                          | MediaSum |
| Parliamentary proceedings | 56M       | 27%                          | Europarl |
| Assistance                | 53M       | 13%                          | ReDial, OpenDialKG, ABCD, AirDialog, MULTIWOZ2_2, MulDoGO |
| Misc                      | 10M       | 10%                          | British National Corpus (BNC) |
| Spoken dialogue           | 4.7M      | 4.6%                         | Charlotte, Switchboard |
| Meetings                  | 1.5M      | <2%                          | AMI, ICSI |
| Free chat                 | 3.6M      | <1%                          | Chit-Chat, Daily Dialog |

Training data was augmented with the following techniques (the first two are illustrated in the toy sketch after this list):
* varying the format used to indicate speech turns (dashes or `[XXX:]`)
* substituting `[Speaker X:]` for `[SpeakerName:]` or vice versa, where `[SpeakerName:]` might be a real name or a randomly generated name
* removing punctuation marks and/or casing (to prepare the model for transcripts produced by some Automatic Speech Recognition systems)
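The following toy sketch illustrates how such speech-turn relabeling might work. The `relabel_speakers` helper and the `NAME_POOL` list are hypothetical illustrations, not the actual training code (which is in the [Lit-Claire](https://github.com/OpenLLM-France/Lit-Claire) repository):

```python
# Toy sketch of the speech-turn augmentation described above (hypothetical
# helper for illustration; the actual code lives in the Lit-Claire repository).
import random
import re

NAME_POOL = ["Alice", "Bob", "Camille", "Dominique"]  # Example substitute names.

def relabel_speakers(dialogue: str) -> str:
    """Randomly rewrite "[Speaker N:]" labels as dashes or speaker names."""
    speakers = sorted(set(re.findall(r"\[Speaker (\d+):\]", dialogue)))
    style = random.choice(["dash", "name", "speaker"])  # Pick one target format.
    for i, n in enumerate(speakers):
        if style == "dash":
            new_label = "-"
        elif style == "name":
            new_label = f"[{NAME_POOL[i % len(NAME_POOL)]}:]"
        else:
            new_label = f"[Speaker {n}:]"  # Keep the original format.
        dialogue = dialogue.replace(f"[Speaker {n}:]", new_label)
    return dialogue

print(relabel_speakers("[Speaker 1:] Hello Alice.\n[Speaker 2:] Hello Bob."))
```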
Long conversations were truncated at a maximum of 2048 tokens. Where possible, they were split between speaker turns.

While the model has been trained and evaluated only on English dialogues, it may be able to generate conversations in other languages from the original Falcon-7b training data.

### Training Procedure

The training code is available at [https://github.com/OpenLLM-France/Lit-Claire](https://github.com/OpenLLM-France/Lit-Claire).

Claire-7B-EN-0.1 is a causal decoder-only model trained on a causal language modeling task (i.e., predict the next token). See [Falcon-7b](https://huggingface.co/tiiuae/falcon-7b) for more details.

Claire-7B-EN-0.1 was trained on 1 A100 80GB GPU for about 50 GPU hours.

Hyperparameters were the following:

| **Hyperparameter** | **Value**  |
|--------------------|------------|
| Precision          | `bfloat16` |
| Optimizer          | AdamW      |
| Learning rate      | 1e-4       |
| Weight decay       | 1e-2       |
| Batch size         | 132        |
| LoRA rank          | 16         |
| LoRA alpha         | 32         |
| Dropout            | 0.05       |
| Gradient clipping  | 1          |

## Variants

Claire-7B-EN-0.1 is finetuned only on English dialogue data, but the following variants are available to evaluate the impact of language mixture on dialogue understanding:
* [Claire-7B-FR-EN-25-75](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-25-75-0.1), with a 25/75 French-English data split
* [Claire-7B-FR-EN-50-50](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-50-50-0.1), with a 50/50 French-English data split
* [Claire-7B-FR-EN-75-25](https://huggingface.co/OpenLLM-France/Claire-7B-FR-EN-75-25-0.1), with a 75/25 French-English data split
* [Claire-7B](https://huggingface.co/OpenLLM-France/Claire-7B-0.1), with only French data

## License

Given that some of the corpora used for training are only available under CC-BY-NC-SA licenses, Claire-7B-EN-0.1 is made available under the [CC-BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/).

## Citation

When using the Claire family of models, please cite the following paper:

Jérôme Louradour, Julie Hunter, Ismaïl Harrando, Guokan Shang, Virgile Rennard & Jean-Pierre Lorré (2024). [Claire: Large Language Models for Spontaneous French Dialogue](https://aclanthology.org/2024.jeptalnrecital-taln.36.pdf). In _Actes de la 31ème Conférence sur le Traitement Automatique des Langues Naturelles, volume 1: articles longs et prises de position_ (pp. 530-548).

```bibtex
@inproceedings{louradour2024claire,
  title={Claire: Large Language Models for Spontaneous French Dialogue},
  author={Louradour, J{\'e}r{\^o}me and Hunter, Julie and Harrando, Isma{\"\i}l and Shang, Guokan and Rennard, Virgile and Lorr{\'e}, Jean-Pierre},
  booktitle={Actes de la 31{\`e}me Conf{\'e}rence sur le Traitement Automatique des Langues Naturelles, volume 1: articles longs et prises de position},
  pages={530--548},
  year={2024}
}
```

## Acknowledgements

This work was performed using HPC resources from GENCI–IDRIS (Grant 2023-AD011014561).

Claire-7B-EN-0.1 was created by members of [LINAGORA](https://labs.linagora.com/). Special thanks to partners from the OpenLLM-France community, especially Christophe Cerisara (LORIA), Pierre-Carl Langlais and Anastasia Stasenko (OpSci), and Pierre Colombo, for valuable advice.

## Contact

contact@openllm-france.fr