---
library_name: transformers
datasets:
- ccdv/cnn_dailymail
language:
- en
base_model:
- google-bert/bert-base-uncased
---

# Model Card for BEASTBOYJAY/my-fine-tuned-summarizer

A BERT-based encoder-decoder model fine-tuned on the CNN/DailyMail dataset for abstractive text summarization in English.

## Model Details

### Model Description

This model generates a summary of the paragraph or article you provide.

- **Developed by:** BEASTBOYJAY
- **Model type:** Transformer (encoder-decoder)
- **Language(s) (NLP):** English
- **Finetuned from model:** google-bert/bert-base-uncased (see the initialization sketch below)
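
An encoder-decoder checkpoint like this one is typically warm-started from two pretrained BERT checkpoints. A minimal sketch of that initialization, assuming bert-base-uncased on both the encoder and decoder side (an illustration of the usual recipe, not the exact training script used here):

```python
from transformers import EncoderDecoderModel, BertTokenizer

# Warm-start an encoder-decoder model from two BERT checkpoints
# (assumption: this mirrors how the fine-tuned checkpoint was initialized).
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "google-bert/bert-base-uncased", "google-bert/bert-base-uncased"
)
tokenizer = BertTokenizer.from_pretrained("google-bert/bert-base-uncased")

# Generation needs to know which tokens start, end, and pad a sequence.
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
```
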
## Uses

- Intended for text summarization only

## Bias, Risks, and Limitations

This model was fine-tuned on a very small dataset and may need further fine-tuning for better results. (It was fine-tuned for educational purposes only.)

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import EncoderDecoderModel, BertTokenizer


class TextSummarizer:
    def __init__(self, model_path, tokenizer_name="bert-base-uncased"):
        # Load the BERT tokenizer and the fine-tuned encoder-decoder checkpoint.
        self.tokenizer = BertTokenizer.from_pretrained(tokenizer_name)
        self.model = EncoderDecoderModel.from_pretrained(model_path)

    def summarize(self, text, max_input_length=512):
        # Tokenize, truncating/padding the input to the encoder's maximum length.
        inputs = self.tokenizer(
            text,
            return_tensors="pt",
            truncation=True,
            padding="max_length",
            max_length=max_input_length,
        )
        # Beam-search generation; decoding starts from the [CLS] token.
        summary_ids = self.model.generate(
            inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            decoder_start_token_id=self.tokenizer.cls_token_id,
            max_length=128,
            num_beams=4,
            length_penalty=1.5,
            no_repeat_ngram_size=1,
            early_stopping=True,
        )
        # Convert the generated token IDs back to text.
        summary = self.tokenizer.decode(summary_ids[0], skip_special_tokens=True)
        return summary


if __name__ == "__main__":
    summarizer = TextSummarizer(model_path="BEASTBOYJAY/my-fine-tuned-summarizer")
    test_article = "Your article or paragraph"
    summary = summarizer.summarize(test_article)
    print("Generated Summary:", summary)
```
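
For multiple articles or GPU inference, the same checkpoint also works with batched inputs. A minimal sketch, assuming the same model path as above and using CUDA when available:

```python
import torch
from transformers import EncoderDecoderModel, BertTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = EncoderDecoderModel.from_pretrained(
    "BEASTBOYJAY/my-fine-tuned-summarizer"
).to(device)
model.eval()

articles = ["First article text...", "Second article text..."]
# Pad to the longest article in the batch and move tensors to the target device.
inputs = tokenizer(
    articles, return_tensors="pt", truncation=True, padding=True, max_length=512
).to(device)

with torch.no_grad():  # no gradients needed for generation
    summary_ids = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        decoder_start_token_id=tokenizer.cls_token_id,
        max_length=128,
        num_beams=4,
    )
summaries = tokenizer.batch_decode(summary_ids, skip_special_tokens=True)
print(summaries)
```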