# WikiBert2WikiBert
BERT language models can be employed for summarization tasks. WikiBert2WikiBert is an encoder-decoder transformer model initialized with the weights of the Persian WikiBERT model, a BERT language model fine-tuned on Persian Wikipedia. After initialization from the WikiBERT weights, the model was trained for five epochs on the PN-Summary and Persian BBC datasets.
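
For context, a bert2bert model of this kind is usually assembled by warm-starting both halves of an `EncoderDecoderModel` from the same BERT checkpoint. The sketch below illustrates that recipe; it is not the exact training setup, and the `TurkuNLP/wikibert-base-fa-cased` checkpoint name is an assumption about which Persian WikiBERT was used.

```python
from transformers import BertTokenizerFast, EncoderDecoderModel

# Assumption: TurkuNLP's Persian WikiBERT checkpoint; the exact source
# checkpoint is not stated in this card.
wikibert = "TurkuNLP/wikibert-base-fa-cased"

tokenizer = BertTokenizerFast.from_pretrained(wikibert)

# Warm-start both the encoder and the decoder from the same BERT weights.
# The decoder's cross-attention layers are newly (randomly) initialized.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(wikibert, wikibert)

# Tell generation which BERT special tokens play the seq2seq roles.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```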

## How to Use
You can use the code below to generate summaries, or simply try the inference widget on the model page.
```python
import torch
from transformers import (
    BertTokenizerFast,
    EncoderDecoderConfig,
    EncoderDecoderModel,
)

model_name = 'Arashasg/WikiBert2WikiBert'
tokenizer = BertTokenizerFast.from_pretrained(model_name)
config = EncoderDecoderConfig.from_pretrained(model_name)
model = EncoderDecoderModel.from_pretrained(model_name, config=config)

# Run on GPU when one is available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)


def generate_summary(text):
    # Tokenize, padding/truncating to the 512-token encoder limit.
    inputs = tokenizer(text, padding="max_length", truncation=True,
                       max_length=512, return_tensors="pt")
    input_ids = inputs.input_ids.to(device)
    attention_mask = inputs.attention_mask.to(device)

    outputs = model.generate(input_ids, attention_mask=attention_mask)

    # Decode the generated token ids back into text.
    return tokenizer.batch_decode(outputs, skip_special_tokens=True)


article = 'your input comes here'
summary = generate_summary(article)
```
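
`model.generate` above falls back to the generation defaults stored in the model config. If you want explicit control over decoding, the standard beam-search arguments can be passed in; the values below are illustrative choices, not the settings this model was tuned or evaluated with.

```python
outputs = model.generate(
    input_ids,
    attention_mask=attention_mask,
    num_beams=4,              # beam search instead of greedy decoding
    max_length=128,           # cap the generated summary length
    no_repeat_ngram_size=3,   # discourage repeated phrases
    early_stopping=True,      # stop when all beams reach EOS
)
```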

## Evaluation
I set aside 5 percent of the PN-Summary dataset as a held-out evaluation set. The model's ROUGE scores on it are as follows:

| ROUGE-1 | ROUGE-2 | ROUGE-L |
| ------- | ------- | ------- |
| 38.97% | 18.42% | 34.50% |
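
An evaluation along these lines can be reproduced with the `datasets` and `evaluate` libraries, reusing the `generate_summary` helper defined above. This is a minimal sketch rather than the exact script behind the numbers above; the `pn_summary` dataset id, its `article`/`summary` field names, and the 100-example subsample are assumptions.

```python
import evaluate
from datasets import load_dataset

# Assumption: the pn_summary dataset as published on the Hugging Face Hub,
# with `article` and `summary` columns.
dataset = load_dataset("pn_summary", split="validation")
rouge = evaluate.load("rouge")

# Summarize a small subsample to keep the sketch cheap to run.
examples = dataset.select(range(100))
predictions = [generate_summary(article)[0] for article in examples["article"]]
references = examples["summary"]

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # rouge1, rouge2, rougeL (F-measures by default)
```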

