---
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- summary
- booksum
- long-document
- long-form
- tglobal-xl
- XL
- 8bit
- quantized
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: false
pipeline_tag: summarization
---
# long-t5-tglobal-xl-16384-book-summary: 8-bit quantized version
<a href="https://colab.research.google.com/gist/pszemraj/c19e32baf876deb866c31cd46c86e893/long-t5-xl-accelerate-test.ipynb">
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>
This is an 8-bit quantized version of the `pszemraj/long-t5-tglobal-xl-16384-book-summary` model. It has been compressed with `bitsandbytes` and can be loaded with low memory usage.
Refer to the [original model](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) for all details about the model architecture and training process. For more information on loading 8-bit models, refer to the `4.28.0` [release information](https://github.com/huggingface/transformers/releases/tag/v4.28.0) and the [example repository](https://huggingface.co/ybelkada/bloom-1b7-8bit).
- The total size of the model is only ~3.5 GB (vs. ~12 GB for the original)
- Enables low-RAM loading, making it easier to use in memory-limited environments like Colab
- Requires `bitsandbytes`; at the time of writing, 8-bit inference works only on GPU
## Basic Usage
To use the model, install or upgrade `transformers`, `accelerate`, and `bitsandbytes`. Make sure to have `transformers>=4.28.0` and `bitsandbytes>0.37.2`.
```bash
pip install -U -q transformers bitsandbytes accelerate
```
Load the model with `AutoTokenizer` and `AutoModelForSeq2SeqLM`:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# device_map="auto" places the 8-bit weights on the available GPU
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto")
```
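Once loaded, the model is used like any other seq2seq checkpoint. A minimal inference sketch (the generation settings below are illustrative defaults, not tuned values, and a GPU is assumed):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto")

long_text = "..."  # replace with a document of up to 16,384 tokens

# tokenize and move inputs to the model's device (the 8-bit weights live on GPU)
inputs = tokenizer(long_text, return_tensors="pt", truncation=True).to(model.device)

# illustrative beam-search settings; adjust for your documents
summary_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    num_beams=4,
    no_repeat_ngram_size=3,
    early_stopping=True,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

Note that the first call to `generate` is slow; subsequent calls on the same loaded model are faster.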
## More information about long-t5-tglobal-xl-16384-book-summary
- This is an 8-bit quantized version of `pszemraj/long-t5-tglobal-xl-16384-book-summary`.
- It generalizes reasonably well to academic and narrative text.
- The XL checkpoint typically generates summaries that human evaluators judge considerably better than those of the smaller checkpoints.