nold committed
Commit 44b6780
1 Parent(s): 09aed2d

920e15a45f733d4b97caf0e621ea040946d69e9b454bd8a798ce1bc772c49f74

Files changed (3)
  1. .gitattributes +1 -0
  2. CroissantLLMBase_Q8_0.gguf +3 -0
  3. README.md +79 -0
.gitattributes CHANGED
@@ -37,3 +37,4 @@ CroissantLLMBase_Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
  CroissantLLMBase_Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
  CroissantLLMBase_Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
  CroissantLLMBase_Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+ CroissantLLMBase_Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
CroissantLLMBase_Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:750d35f25a7bf8e3f82eccd2c5bf32e692f3f106f243aa7c7b277e6c809c61bb
+ size 1430560960
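
The LFS pointer above records the file's sha256 digest and byte size, so a completed download can be checked offline. A minimal sketch in Python, assuming the GGUF file was saved in the working directory under its original name (the oid and size come from the pointer):

```python
# Minimal sketch: verify a downloaded GGUF file against the Git LFS pointer above.
# The local path is an assumption; EXPECTED_OID and EXPECTED_SIZE come from the pointer.
import hashlib
import os

EXPECTED_OID = "750d35f25a7bf8e3f82eccd2c5bf32e692f3f106f243aa7c7b277e6c809c61bb"
EXPECTED_SIZE = 1430560960
path = "CroissantLLMBase_Q8_0.gguf"

# Cheap check first: the byte size must match the pointer.
assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"

# Then hash the file in 1 MiB chunks and compare against the LFS oid.
digest = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)
assert digest.hexdigest() == EXPECTED_OID, "sha256 mismatch"
print("GGUF file matches the LFS pointer")
```
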
README.md ADDED
@@ -0,0 +1,79 @@
---
license: mit
datasets:
- cerebras/SlimPajama-627B
- uonlp/CulturaX
- pg19
- bigcode/starcoderdata
- croissantllm/croissant_dataset
language:
- fr
- en
pipeline_tag: text2text-generation
tags:
- legal
- code
- text-generation-inference
- art
---

# CroissantLLM - Base (190k steps, Final version)

This model is part of the CroissantLLM initiative and corresponds to the checkpoint after 190k steps (2.99T tokens).

To play with the final model, we recommend using the Chat version: https://huggingface.co/croissantllm/CroissantLLMChat-v0.1.

Paper: https://arxiv.org/abs/2402.00786

## Abstract
We introduce CroissantLLM, a 1.3B language model pretrained on a set of 3T English and French tokens, to bring to the research and industrial community a high-performance, fully open-sourced bilingual model that runs swiftly on consumer-grade local hardware. To that end, we pioneer the approach of training an intrinsically bilingual model with a 1:1 English-to-French pretraining data ratio, a custom tokenizer, and bilingual finetuning datasets. We release the training dataset, notably containing a French split with manually curated, high-quality, and varied data sources. To assess performance outside of English, we craft a novel benchmark, FrenchBench, consisting of an array of classification and generation tasks covering various orthogonal aspects of model performance in the French language. Additionally, rooted in transparency and to foster further Large Language Model research, we release codebases and dozens of checkpoints across various model sizes, training data distributions, and training steps, as well as fine-tuned Chat models and strong translation models. We evaluate our model through the FMTI framework and validate 81% of the transparency criteria, far beyond the scores of even most open initiatives. This work enriches the NLP landscape, breaking away from previous English-centric work in order to strengthen our understanding of multilinguality in language models.

## Citation

Our work can be cited as:

```bibtex
@misc{faysse2024croissantllm,
      title={CroissantLLM: A Truly Bilingual French-English Language Model},
      author={Manuel Faysse and Patrick Fernandes and Nuno Guerreiro and António Loison and Duarte Alves and Caio Corro and Nicolas Boizard and João Alves and Ricardo Rei and Pedro Martins and Antoni Bigata Casademunt and François Yvon and André Martins and Gautier Viaud and Céline Hudelot and Pierre Colombo},
      year={2024},
      eprint={2402.00786},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

## Usage

This is a base model: it is not finetuned for chat and works best with few-shot prompting strategies, as in the examples below.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "croissantllm/CroissantLLMBase"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map="auto")

# Few-shot translation: two English -> French examples, then a sentence to complete.
inputs = tokenizer("I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\nHe is heading to the market. -> Il va au marché.\nWe are running on the beach. ->", return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60, temperature=0.3)
print(tokenizer.decode(tokens[0]))

# Few-shot completion without the BOS token (add_special_tokens=False removes it).
inputs = tokenizer("Capitales: France -> Paris, Italie -> Rome, Allemagne -> Berlin, Espagne ->", return_tensors="pt", add_special_tokens=False).to(model.device)
tokens = model.generate(**inputs, max_length=100, do_sample=True, top_p=0.95, top_k=60)
print(tokenizer.decode(tokens[0]))
```

***

Quantization of [croissantllm/CroissantLLMBase](https://huggingface.co/croissantllm/CroissantLLMBase), created with the [llm-quantizer](https://github.com/Nold360/llm-quantizer) pipeline at revision 8668cbd2081063e33a128251312e6de9744d0a64.
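
A minimal sketch of running the Q8_0 file locally, assuming the llama-cpp-python bindings (any llama.cpp-compatible runtime works) and a local copy of the GGUF; the sampling parameters mirror the transformers example above:

```python
# Minimal sketch, assuming llama-cpp-python is installed and
# CroissantLLMBase_Q8_0.gguf has been downloaded to the working directory.
from llama_cpp import Llama

llm = Llama(model_path="CroissantLLMBase_Q8_0.gguf", n_ctx=2048)

# Same few-shot translation prompt as in the Usage section of the README.
prompt = (
    "I am so tired I could sleep right now. -> Je suis si fatigué que je pourrais m'endormir maintenant.\n"
    "He is heading to the market. -> Il va au marché.\n"
    "We are running on the beach. ->"
)

# Stop at the newline so the model completes only the current line.
out = llm(prompt, max_tokens=50, temperature=0.3, top_p=0.95, stop=["\n"])
print(out["choices"][0]["text"].strip())
```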