---
license:
  - apache-2.0
  - bsd-3-clause
tags:
  - summarization
  - summary
  - booksum
  - long-document
  - long-form
  - tglobal-xl
  - XL
datasets:
  - kmfoda/booksum
metrics:
  - rouge
inference: false
---


# long-t5-tglobal-xl-16384-book-summary: the 8-bit quantized version

<a href="https://colab.research.google.com/gist/pszemraj/c19e32baf876deb866c31cd46c86e893/long-t5-xl-accelerate-test.ipynb">
  <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
</a>

This is an 8-bit quantized version of the `pszemraj/long-t5-tglobal-xl-16384-book-summary` model. The model has been compressed using `bitsandbytes` and can be loaded with low memory usage.

Refer to the [original model](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) for all details about the model architecture and training process. For more information on loading 8-bit models, refer to the `4.28.0` [release information](https://github.com/huggingface/transformers/releases/tag/v4.28.0) and the [example repository](https://huggingface.co/ybelkada/bloom-1b7-8bit).

- The total size of the model is only ~3.5 GB, much smaller than the original checkpoint.
- This allows for low-RAM loading, making it easier to use in memory-limited environments.
- `bitsandbytes` (at the time of writing) only works on GPU.


## Basic Usage

To use the model, install or upgrade `transformers`, `accelerate`, and `bitsandbytes`. Make sure you have `transformers>=4.28.0` and `bitsandbytes>0.37.2`.

```bash
pip install -U -q transformers bitsandbytes accelerate
```
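Since `bitsandbytes` is GPU-only, it can save time to sanity-check the environment before loading. An optional, minimal check:

```python
import torch
from importlib.metadata import version

# bitsandbytes 8-bit kernels require a CUDA GPU
assert torch.cuda.is_available(), "8-bit loading requires a CUDA GPU"

print(version("transformers"))  # needs >= 4.28.0
print(version("bitsandbytes"))  # needs > 0.37.2
```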

Load the model with `AutoTokenizer` and `AutoModelForSeq2SeqLM`:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# the 8-bit quantization config saved with this checkpoint is picked up
# automatically, so no extra arguments are needed here
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
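
Once loaded, the model behaves like any other `transformers` seq2seq model. A minimal sketch, assuming the `model` and `tokenizer` from the snippet above; the generation settings are illustrative, not tuned recommendations:

```python
# optional: confirm the quantized footprint (should be roughly ~3.5 GB)
print(f"{model.get_memory_footprint() / 1e9:.2f} GB")

long_text = "Here is a long document to summarize..."  # placeholder input

inputs = tokenizer(long_text, return_tensors="pt").to(model.device)
summary_ids = model.generate(
    **inputs,
    max_new_tokens=256,      # illustrative cap on summary length
    num_beams=4,             # illustrative; greedy decoding also works
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```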

## More information about long-t5-tglobal-xl-16384-book-summary

- This is an 8-bit quantized version of `pszemraj/long-t5-tglobal-xl-16384-book-summary`.
- It generalizes reasonably well to academic and narrative text, producing high-quality summaries.
- The XL checkpoint is used, which yields even better summaries from a human-evaluation perspective.
- A simple example/use case with the base model on ASR can be found [here](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary/tree/main/examples/asr).
- A proof-of-concept example using the infamous Navy Seals copypasta demonstrates the model's ability to generate summaries from even short text inputs.
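
For short inputs like that, the already-loaded model can also be wrapped in a standard `pipeline`. A minimal sketch, reusing `model` and `tokenizer` from above (the input string is a placeholder, not the actual copypasta):

```python
from transformers import pipeline

# wrap the 8-bit model in a summarization pipeline
summarizer = pipeline("summarization", model=model, tokenizer=tokenizer)
result = summarizer(
    "A few sentences of short input text to summarize...",  # placeholder input
    max_new_tokens=64,  # illustrative
)
print(result[0]["summary_text"])
```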