# Longformer Encoder-Decoder (LED) fine-tuned on Booksum

- an 'upgraded' version of [`pszemraj/led-base-16384-finetuned-booksum`](https://huggingface.co/pszemraj/led-base-16384-finetuned-booksum): it was trained for an additional epoch with a maximum summary length of 1024 tokens (the original was trained with 512), since a small portion of the summaries are 512-1024 tokens long.
- all the parameters for generation on the API are kept the same for easy comparison between versions.
- works well on long text and can handle up to 16,384 tokens per batch.

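Because the encoder tops out at 16,384 tokens, anything longer has to be split before it is summarized. A minimal sketch of such chunking, using whitespace-split words as a rough stand-in for model tokens (a real pipeline would count with the model's tokenizer; `chunk_text` is a hypothetical helper, not part of this model card):

```python
def chunk_text(text: str, max_tokens: int = 16384) -> list[str]:
    """Split text into pieces of at most max_tokens whitespace tokens."""
    words = text.split()
    return [
        " ".join(words[i : i + max_tokens])
        for i in range(0, len(words), max_tokens)
    ]

# summarize each chunk independently, then join (or re-summarize) the outputs
chunks = chunk_text("some very long document " * 5000)
```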
## Other Checkpoints on Booksum
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers import pipeline

hf_name = 'pszemraj/led-base-book-summary'

_model = AutoModelForSeq2SeqLM.from_pretrained(
    hf_name,
)
_tokenizer = AutoTokenizer.from_pretrained(hf_name)

# wrap the model and tokenizer in a summarization pipeline
summarizer = pipeline(
    "summarization",
    model=_model,
    tokenizer=_tokenizer,
)
```

**no results for this version yet**

---