---
license:
- apache-2.0
- bsd-3-clause
tags:
- summarization
- summary
- booksum
- long-document
- long-form
- tglobal-xl
- XL
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: false
---

# long-t5-tglobal-xl-8b: 8-bit quantized version

This is an 8-bit quantized version of the `pszemraj/long-t5-tglobal-xl-16384-book-summary` model, called `long-t5-tglobal-xl-8b`. The model has been compressed using `bitsandbytes` and can be loaded with low memory usage.

Refer to the [original model](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary) for all details about the model architecture and training process. For more information on loading 8-bit models, refer to the `4.28.0` [release information](https://github.com/huggingface/transformers/releases/tag/v4.28.0) and the [example repository](https://huggingface.co/ybelkada/bloom-1b7-8bit).

- The total size of the model is only ~3.5 GB, much smaller than the original size.
- This allows for low-RAM loading, making it easier to use in memory-limited environments.
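
The savings follow directly from the bytes stored per parameter. A back-of-the-envelope sketch (the ~3B parameter count used here is a rough approximation for the XL checkpoint, not an official figure; the on-disk ~3.5 GB is slightly larger than the raw int8 estimate, likely because some modules are kept in higher precision):

```python
def footprint_gib(n_params: int, bytes_per_param: int) -> float:
    """Approximate in-memory weight size in GiB."""
    return n_params * bytes_per_param / 1024**3

N_PARAMS = 3_000_000_000  # rough tglobal-xl parameter count (assumption)

fp32_gib = footprint_gib(N_PARAMS, 4)  # ~11.2 GiB at 32-bit
int8_gib = footprint_gib(N_PARAMS, 1)  # ~2.8 GiB at 8-bit

print(f"fp32: {fp32_gib:.1f} GiB, int8: {int8_gib:.1f} GiB")
```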

## Basic Usage

To use the model, install or upgrade `transformers`, `accelerate`, and `bitsandbytes`. Make sure you have `transformers>=4.28.0` and `bitsandbytes>0.37.2`:

```bash
pip install -U -q transformers bitsandbytes accelerate
```
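
Since version mismatches are a common cause of loading failures, a quick stdlib-only sanity check (no model download needed; the helper below is illustrative, not part of any library) can confirm the environment meets these minimums:

```python
from importlib.metadata import PackageNotFoundError, version

def as_tuple(v: str) -> tuple:
    """'4.28.0' -> (4, 28, 0); stops at the first non-numeric part."""
    parts = []
    for p in v.split("."):
        digits = "".join(ch for ch in p if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

REQUIREMENTS = {"transformers": "4.28.0", "bitsandbytes": "0.37.2"}

for pkg, minimum in REQUIREMENTS.items():
    try:
        installed = version(pkg)
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
        continue
    ok = as_tuple(installed) >= as_tuple(minimum)
    print(f"{pkg} {installed}: {'OK' if ok else 'needs >= ' + minimum}")
```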

Load the model with `AutoTokenizer` and `AutoModelForSeq2SeqLM`:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "pszemraj/long-t5-tglobal-xl-16384-book-summary-8bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# device_map="auto" places the 8-bit weights on the available device(s);
# the quantization config saved with the checkpoint is applied automatically
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, device_map="auto")
```
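
The encoder accepts inputs up to 16,384 tokens; longer documents are typically split into overlapping windows that are summarized one at a time. A minimal sketch of that chunking step, operating on a plain list of token IDs (the window and overlap sizes are illustrative, not values prescribed by the model):

```python
def chunk_token_ids(ids, max_len=16384, overlap=256):
    """Split a token-id sequence into overlapping windows of at most max_len."""
    if max_len <= overlap:
        raise ValueError("max_len must exceed overlap")
    stride = max_len - overlap  # how far each window advances
    windows = []
    for start in range(0, len(ids), stride):
        windows.append(ids[start:start + max_len])
        if start + max_len >= len(ids):
            break
    return windows

# tiny example with toy sizes: 25 tokens, windows of 10, overlap of 2
windows = chunk_token_ids(list(range(25)), max_len=10, overlap=2)
print([len(w) for w in windows])  # [10, 10, 9]
```

Each window can then be passed through `tokenizer`/`model.generate` independently, and the per-window summaries concatenated (or summarized again) downstream.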

## More information about long-t5-tglobal-xl-16384-book-summary

- This is an 8-bit quantized version of `pszemraj/long-t5-tglobal-xl-16384-book-summary`.
- It generalizes reasonably well to academic and narrative text, producing high-quality summaries.
- The XL checkpoint produces even better summaries from a human-evaluation perspective.
- A simple example/use case with the base model on ASR output can be found [here](https://huggingface.co/pszemraj/long-t5-tglobal-xl-16384-book-summary/tree/main/examples/asr).
- A proof-of-concept example using the infamous Navy Seals copypasta shows the model can summarize even short text inputs.