---
datasets:
language:
- fr
library_name: transformers
---

# CamemBERT: a Tasty French Language Model

## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citation Information](#citation-information)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)

## Model Details
- **Model Description:** This model is a state-of-the-art language model for French coreference resolution.
- **Developed by:** Grégory Guichard
- **Model Type:** Token Classification
- **Language(s):** French
- **License:** MIT
- **Parent Model:** See the [camembert-large model](https://huggingface.co/camembert/camembert-large) for more information about the underlying RoBERTa-based model.
- **Resources for more information:**

## Uses

#### Direct Use

This model can be used for token classification tasks.
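
As an illustrative sketch only (the Hub model id below is a hypothetical placeholder, and the label set of the coreference head is not documented in this card), the checkpoint could be served through the `pipeline` API:

```python
from transformers import pipeline

# NOTE: "<this-model-id>" is a hypothetical placeholder for this repository's Hub id;
# the tags it predicts depend on the coreference label scheme used during fine-tuning.
coref_tagger = pipeline(
    "token-classification",
    model="<this-model-id>",
    aggregation_strategy="simple",  # merge sub-word pieces into word-level groups
)

preds = coref_tagger("Marie a dit qu'elle arriverait demain.")
for p in preds:
    # Each prediction carries the group label, confidence score, and character span.
    print(p["entity_group"], round(p["score"], 3), p["word"], p["start"], p["end"])
```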

## Risks, Limitations and Biases
**CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.**

Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)).

This model was pretrained on a subcorpus of the OSCAR multilingual corpus. Some of the limitations and risks associated with the OSCAR dataset, which are further detailed in the [OSCAR dataset card](https://huggingface.co/datasets/oscar), include the following:

> The quality of some OSCAR sub-corpora might be lower than expected, specifically for the lowest-resource languages.

> Being constructed from Common Crawl, personal and sensitive information might be present.

## Training

#### Training Data
OSCAR, or Open Super-large Crawled Aggregated coRpus, is a multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the Ungoliant architecture.
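
As a hedged illustration (this card does not state exactly which OSCAR subcorpus was used; the `oscar` dataset id and the `unshuffled_deduplicated_fr` config below are assumptions), the French portion of OSCAR can be streamed with the `datasets` library:

```python
from datasets import load_dataset

# Stream the deduplicated French split of OSCAR instead of downloading all of it.
# Dataset id and config name are assumptions; check the OSCAR dataset card.
oscar_fr = load_dataset("oscar", "unshuffled_deduplicated_fr", split="train", streaming=True)

# Peek at a few documents
for record in oscar_fr.take(3):
    print(record["text"][:80])
```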

#### Training Procedure

| Model                                     | #params | Arch. | Training data                     |
|-------------------------------------------|---------|-------|-----------------------------------|
| `camembert-base`                          | 110M    | Base  | OSCAR (138 GB of text)            |
| `camembert/camembert-large`               | 335M    | Large | CCNet (135 GB of text)            |
| `camembert/camembert-base-ccnet`          | 110M    | Base  | CCNet (135 GB of text)            |
| `camembert/camembert-base-wikipedia-4gb`  | 110M    | Base  | Wikipedia (4 GB of text)          |
| `camembert/camembert-base-oscar-4gb`      | 110M    | Base  | Subsample of OSCAR (4 GB of text) |
| `camembert/camembert-base-ccnet-4gb`      | 110M    | Base  | Subsample of CCNet (4 GB of text) |

## Evaluation

The model developers evaluated CamemBERT using four different downstream tasks for French: part-of-speech (POS) tagging, dependency parsing, named entity recognition (NER) and natural language inference (NLI).

## Citation Information

```bibtex
@inproceedings{martin2020camembert,
  title={CamemBERT: a Tasty French Language Model},
  author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
  booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
  year={2020}
}
```

## How to Get Started With the Model

##### Load CamemBERT and its sub-word tokenizer:
```python
from transformers import CamembertModel, CamembertTokenizer

# You can replace "camembert-base" with any other model from the table, e.g. "camembert/camembert-large".
tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
camembert = CamembertModel.from_pretrained("camembert-base")

camembert.eval()  # disable dropout (or leave in train mode to fine-tune)
```

##### Filling masks using the pipeline API
```python
from transformers import pipeline

camembert_fill_mask = pipeline("fill-mask", model="camembert-base", tokenizer="camembert-base")
results = camembert_fill_mask("Le camembert est <mask> :)")
# results
# [{'sequence': '<s> Le camembert est délicieux :)</s>', 'score': 0.4909103214740753, 'token': 7200},
#  {'sequence': '<s> Le camembert est excellent :)</s>', 'score': 0.10556930303573608, 'token': 2183},
#  {'sequence': '<s> Le camembert est succulent :)</s>', 'score': 0.03453315049409866, 'token': 26202},
#  {'sequence': '<s> Le camembert est meilleur :)</s>', 'score': 0.03303130343556404, 'token': 528},
#  {'sequence': '<s> Le camembert est parfait :)</s>', 'score': 0.030076518654823303, 'token': 1654}]
```

##### Extract contextual embedding features from CamemBERT output
```python
import torch

# Tokenize into sub-words with SentencePiece
tokenized_sentence = tokenizer.tokenize("J'aime le camembert !")
# ['▁J', "'", 'aime', '▁le', '▁ca', 'member', 't', '▁!']

# Convert to token ids and add the special start and end tokens
encoded_sentence = tokenizer.encode(tokenized_sentence)
# [5, 121, 11, 660, 16, 730, 25543, 110, 83, 6]
# NB: this can be done in one step: tokenizer.encode("J'aime le camembert !")

# Feed tokens to CamemBERT as a torch tensor (batch dim 1)
encoded_sentence = torch.tensor(encoded_sentence).unsqueeze(0)
# Recent transformers versions return a ModelOutput; take the last hidden state
embeddings = camembert(encoded_sentence).last_hidden_state
# embeddings.size() == torch.Size([1, 10, 768])
# tensor([[[-0.0254,  0.0235,  0.1027,  ..., -0.1459, -0.0205, -0.0116],
#          [ 0.0606, -0.1811, -0.0418,  ..., -0.1815,  0.0880, -0.0766],
#          [-0.1561, -0.1127,  0.2687,  ..., -0.0648,  0.0249,  0.0446],
#          ...,
```

##### Extract contextual embedding features from all CamemBERT layers
```python
from transformers import CamembertConfig

# Reload the model with a config that exposes all hidden states
config = CamembertConfig.from_pretrained("camembert-base", output_hidden_states=True)
camembert = CamembertModel.from_pretrained("camembert-base", config=config)

# hidden_states holds 13 tensors: the input embedding layer + 12 self-attention layers
all_layer_embeddings = camembert(encoded_sentence).hidden_states
all_layer_embeddings[5]
# layer 5 contextual embedding: size torch.Size([1, 10, 768])
# tensor([[[-0.0032,  0.0075,  0.0040,  ..., -0.0025, -0.0178, -0.0210],
#          [-0.0996, -0.1474,  0.1057,  ..., -0.0278,  0.1690, -0.2982],
#          [ 0.0557, -0.0588,  0.0547,  ..., -0.0726, -0.0867,  0.0699],
#          ...,
```
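
As a follow-on sketch that is not part of the original card, the per-layer hidden states above can be pooled into a single sentence vector, for example by averaging the last four layers and then mean-pooling over tokens:

```python
import torch

# Stack the last four hidden-state tensors: shape (4, 1, seq_len, 768)
last_four = torch.stack(all_layer_embeddings[-4:])

# Average over layers, then mean-pool over the token dimension -> shape (1, 768)
sentence_vector = last_four.mean(dim=0).mean(dim=1)
print(sentence_vector.shape)  # torch.Size([1, 768])
```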