dirkgr lbourdois committed on
Commit e1dd552
1 Parent(s): 985f0cb

Upload README.md with huggingface_hub (#4)


- Upload README.md with huggingface_hub (76c8f38ca4528555a07633cf5d9e7818d77e98cc)


Co-authored-by: Loïck BOURDOIS <lbourdois@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +28 -26
README.md CHANGED
@@ -1,28 +1,30 @@
---
+ language: en
license: apache-2.0
---

+
Hugging Face (HF) version of PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization (ACL 2022).

The original code can be found [here](https://github.com/allenai/PRIMER); the scripts and notebooks for training and evaluating the model are in that GitHub repo.

* Note: due to differences between the implementations of the original Longformer and the Hugging Face LED model, the results of the converted models differ slightly. We ran a sanity check on both the fine-tuned and non-fine-tuned models on the **Multi-News dataset**; the results are shown below:

| Model                   | ROUGE-1 | ROUGE-2 | ROUGE-L |
| ----------------------- | ------- | ------- | ------- |
| PRIMERA                 | 42.0    | 13.6    | 20.8    |
| PRIMERA-hf              | 41.7    | 13.6    | 20.5    |
| PRIMERA (fine-tuned)    | 49.9    | 21.1    | 25.9    |
| PRIMERA-hf (fine-tuned) | 49.9    | 20.9    | 25.8    |

You can use it as follows:
```python
from transformers import (
    AutoTokenizer,
    LEDConfig,
    LEDForConditionalGeneration,
)
tokenizer = AutoTokenizer.from_pretrained('allenai/PRIMERA')
config = LEDConfig.from_pretrained('allenai/PRIMERA')
model = LEDForConditionalGeneration.from_pretrained('allenai/PRIMERA')
```
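
For reference, here is a minimal sketch of actually generating a summary with this checkpoint. It is not taken from the README above: the `<doc-sep>` separator joining the input documents and the generation settings are assumptions based on the original PRIMER repo.

```python
from transformers import AutoTokenizer, LEDForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("allenai/PRIMERA")
model = LEDForConditionalGeneration.from_pretrained("allenai/PRIMERA")

# Join the input documents with PRIMERA's document separator.
# The "<doc-sep>" token is an assumption based on the original PRIMER repo.
docs = ["First news article ...", "Second news article ..."]
text = " <doc-sep> ".join(docs)

inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
summary_ids = model.generate(
    **inputs,
    max_length=256,  # illustrative generation settings, not from the README
    num_beams=5,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

In the original implementation, global attention is additionally placed on the separator tokens; if you want to mirror that behavior, `generate` also accepts a `global_attention_mask` keyword.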