add details on api
README.md
- all the parameters for generation on the API are the same as [the base model](https://huggingface.co/pszemraj/led-base-book-summary) for easy comparison between versions.
- works well on lots of text, can handle 16384 tokens/batch.

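To make the 16384 tokens/batch figure concrete, a long input can be split into window-sized batches before summarizing. This is a minimal sketch; the helper name and the slice-based batching are assumptions, not code from this repo:

```python
# Hypothetical helper (not from this repo): split a long token-id sequence
# into consecutive batches that each fit the 16384-token encoder window.
MAX_BATCH_TOKENS = 16384

def chunk_token_ids(token_ids, max_tokens=MAX_BATCH_TOKENS):
    """Yield consecutive slices of at most `max_tokens` ids each."""
    for start in range(0, len(token_ids), max_tokens):
        yield token_ids[start:start + max_tokens]

# a 40,000-token document splits into batches of 16384, 16384, and 7232 tokens
batches = list(chunk_token_ids(list(range(40_000))))
```

One simple workflow is to summarize each batch separately and join (or re-summarize) the partial summaries.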
---

# Usage - Basics

- it is recommended to use `encoder_no_repeat_ngram_size=3` when calling the pipeline object to improve summary quality.
  - this param forces the model to use new vocabulary and create an abstractive summary; otherwise, it may compile the best _extractive_ summary from the input provided.
- create the pipeline object:

> Note: the API is set to generate a maximum of 64 tokens for runtime reasons, so the summaries may be truncated (depending on the length of the input text). For best results, use Python as below.

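As a sketch of what that local setup might look like: the checkpoint id `pszemraj/led-base-book-summary` below is the base model linked above (substitute this repo's checkpoint id), and `max_length=256` is an assumed value chosen to get past the hosted API's 64-token cap:

```python
# Sketch of local usage; checkpoint id and max_length are assumptions (see above).
GEN_KWARGS = {
    "max_length": 256,  # assumed: raised past the hosted API's 64-token cap
    "encoder_no_repeat_ngram_size": 3,  # recommended above: pushes toward abstractive output
}

def summarize(text, model_id="pszemraj/led-base-book-summary"):
    """Build a summarization pipeline and return the summary string."""
    # imported here so the settings above are usable without transformers installed
    from transformers import pipeline

    summarizer = pipeline("summarization", model=model_id)
    return summarizer(text, **GEN_KWARGS)[0]["summary_text"]
```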
```
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer