Update README.md
![Bart Logo](https://huggingface.co/front/assets/huggingface_logo.svg)

This repository contains the **Bart-Large-paper2slides-summarizer Model**, fine-tuned on the [Automatic Slide Generation from Scientific Papers dataset](https://www.kaggle.com/datasets/andrewmvd/automatic-slide-generation-from-scientific-papers) with unsupervised learning techniques, using an algorithm from the paper '[Unsupervised Machine Translation Using Monolingual Corpora Only](https://arxiv.org/abs/1711.00043)'.

Its primary focus is summarizing **scientific texts** with precision and accuracy; the model is trained in parallel with another model from the same contributor.

## Model Details
To use this model, you can leverage the Hugging Face [Transformers](https://huggingface.co/docs/transformers) library:

```python
from transformers import BartTokenizer, BartForConditionalGeneration, pipeline

# Load the model and tokenizer
model_name = "com3dian/Bart-large-paper2slides-summarizer"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Generate a summary from input text
input_text = "Your input text here..."
input_ids = tokenizer.encode(input_text, return_tensors="pt")
output = model.generate(input_ids)

# Decode the generated summary
summary = tokenizer.decode(output[0], skip_special_tokens=True)
print(summary)

# Or use the pipeline API
summarizer = pipeline("summarization", model=model_name)
summary = summarizer(input_text, max_length=50, min_length=30, do_sample=False)
print(summary)
```
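Scientific papers usually exceed BART's roughly 1024-token input window, so long documents need to be summarized chunk by chunk. Below is a minimal sketch of that approach; the word-based chunker, its `chunk_size`/`overlap` parameters, and the `summarize_long` helper are illustrative assumptions, not part of this model card.

```python
# Sketch: split a long paper into overlapping chunks that fit BART's input
# window, summarize each chunk, then join the partial summaries.
# NOTE: the word-count heuristic only approximates the token count.

def chunk_words(text, chunk_size=700, overlap=50):
    """Split text into overlapping word chunks (words approximate tokens)."""
    words = text.split()
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break
    return chunks

def summarize_long(text, summarize_fn, **kwargs):
    """Summarize each chunk with summarize_fn and join the results."""
    return " ".join(summarize_fn(c, **kwargs) for c in chunk_words(text))

# Usage with the pipeline API (downloads the model on first run):
# summarizer = pipeline("summarization",
#                       model="com3dian/Bart-large-paper2slides-summarizer")
# summary = summarize_long(paper_text,
#                          lambda t: summarizer(t)[0]["summary_text"])
```

Joining per-chunk summaries is the simplest strategy; for slide-style output you may instead want to keep each chunk's summary as its own bullet or slide.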
Ensure you have the `transformers` library installed before running the code. You can install it using `pip`:
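The source cuts off before the command itself; the standard install is:

```shell
pip install transformers
```

You will also need a backend such as PyTorch (`pip install torch`) for the model to run.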