mrm8488 committed
Commit 86cb911
1 Parent(s): ceaa8ad

Create README.md

---
license: wtfpl
language: es
tags:
- gpt-j
- spanish
- gpt-j-6b
---

# BERTIN-GPT-J-6B with 8-bit weights (Quantized)

This model (and model card) is an adaptation of [hivemind/gpt-j-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit), so all credit goes to its author.

This is a version of the **latest checkpoint (1M steps)** of **bertin-project/bertin-gpt-j-6B**, modified so you can generate **and fine-tune it in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti)**.

Here's how to run it: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)

__The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB of memory for float32 parameters alone, and that's before you account for gradients and optimizer state. Even if you cast everything to 16-bit, it still won't fit onto most single-GPU setups short of an A6000 or A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or on CPU, but fine-tuning is far more expensive.
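
For intuition, here is the back-of-the-envelope arithmetic behind those numbers. The parameter count is approximate; this is a minimal sketch, not a measurement:

```py
# GPT-J-6B has roughly 6.05 billion parameters (approximate figure)
params = 6_050_000_000

for dtype, bytes_per_param in [("float32", 4), ("float16", 2), ("int8", 1)]:
    gib = params * bytes_per_param / 2**30
    print(f"{dtype}: ~{gib:.1f} GiB for the weights alone")

# float32: ~22.5 GiB, float16: ~11.3 GiB, int8: ~5.6 GiB
# gradients and optimizer state add several times more on top of this
```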

Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB of memory:
- large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication
- gradient checkpointing stores only one activation per layer, using dramatically less memory at the cost of ~30% slower training
- scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861)

In other words, all of the large weight matrices are frozen in 8-bit, and you only train small adapters and, optionally, 1d tensors (layernorm scales and biases).
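
To make the idea concrete, here is a minimal, self-contained sketch of such a layer: the base weight is stored as frozen int8 and de-quantized on the fly, while only a small low-rank adapter (and optionally the bias) is trained. This is an illustration only; the class name, the per-row absmax scheme, and the adapter shapes are assumptions, not the actual `FrozenBNB*` implementation shipped in `utils.py`.

```py
import torch
import torch.nn as nn
import torch.nn.functional as F

class Frozen8BitLinearWithLoRA(nn.Module):
    """Toy sketch: frozen 8-bit base weight + trainable low-rank (LoRA) adapter."""
    def __init__(self, linear: nn.Linear, rank: int = 8):
        super().__init__()
        w = linear.weight.data
        # naive per-row absmax quantization to int8 (the real code uses
        # block-wise, nonlinear quantization from bitsandbytes)
        scale = w.abs().max(dim=1, keepdim=True).values.clamp_min(1e-8) / 127.0
        self.register_buffer("weight_int8", torch.round(w / scale).to(torch.int8))
        self.register_buffer("scale", scale)
        self.bias = nn.Parameter(linear.bias.data.clone()) if linear.bias is not None else None
        # LoRA factors: only these (and optionally the bias) receive gradients
        self.lora_a = nn.Parameter(torch.randn(rank, w.shape[1]) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(w.shape[0], rank))

    def forward(self, x):
        w = self.weight_int8.float() * self.scale           # de-quantize just-in-time
        out = F.linear(x, w, self.bias)
        return out + (x @ self.lora_a.T) @ self.lora_b.T    # low-rank trainable update
```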

![img](https://i.imgur.com/n4XXo1x.png)

__Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. The quantized model even scores slightly better, but the difference is not statistically significant.

Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for a much smaller quantization error.
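
For intuition, here is a toy version of what "nonlinear quantization fit to the weight distribution" means: build a 256-entry codebook from the empirical quantiles of the tensor, store 8-bit codes, and look them up again just before the matmul. This is a per-tensor sketch for illustration only; the actual model uses the block-wise quantization kernels from bitsandbytes, and the function names below are made up.

```py
import torch

def quantize_nonlinear(w: torch.Tensor, n_levels: int = 256):
    """Fit a codebook to the empirical weight distribution and store 8-bit codes."""
    probs = torch.linspace(0, 1, n_levels, device=w.device)
    codebook = torch.quantile(w.flatten().float(), probs)            # one value per code
    codes = torch.bucketize(w.flatten().float(), codebook).clamp(0, n_levels - 1)
    return codes.to(torch.uint8).view_as(w), codebook

def dequantize_nonlinear(codes: torch.Tensor, codebook: torch.Tensor):
    """Look each 8-bit code up in the codebook (done just-in-time before matmul)."""
    return codebook[codes.long()]

w = torch.randn(1024, 1024) * 0.02           # stand-in for a weight matrix
codes, codebook = quantize_nonlinear(w)
w_hat = dequantize_nonlinear(codes, codebook)
print((w - w_hat).abs().mean())              # small reconstruction error
```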

__What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on the GPU and batch size, the quantized model is 1-10% slower than the original model, on top of the ~30% overhead from gradient checkpointing. In short, this is because the block-wise quantization from bitsandbytes is really fast on GPU.

### How should I fine-tune the model?

We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf).
On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size.
As a result, the larger the batch size you can fit, the more efficiently you will train.
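
As a rough illustration (not part of the original card), a single fine-tuning step with these ingredients could look like the sketch below. It assumes `model`, `tokenizer` and `device` were loaded as shown in the "How to use" section further down, that the frozen 8-bit matrices no longer require gradients, and that `Adam8bit` is available from the bitsandbytes package installed below; the learning rate and the prompt are placeholders.

```py
import bitsandbytes as bnb   # provides the 8-bit Adam optimizer

model.gradient_checkpointing_enable()   # ~30% slower, dramatically less memory
model.train()

# only the small tensors (adapters, layernorm scales, biases) still require grad;
# the frozen 8-bit matrices are excluded automatically
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = bnb.optim.Adam8bit(trainable, lr=1e-5)   # start from the LoRA paper's hyperparameters

batch = tokenizer("El Quijote fue escrito por", return_tensors="pt").to(device)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
optimizer.zero_grad()
```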

### Where can I train for free?

You can train just fine in Colab, but if you get a K80 GPU, it's probably best to switch to one of the other free GPU providers: [Kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [AWS SageMaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [Paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, here is the same notebook [running in Kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) on a more powerful P100 instance.

### Can I use this technique with other models?

The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding layers with custom alternatives that would require their own BNBWhateverWithAdapters.
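
In spirit, the conversion walks the module tree and swaps each Linear (and Embedding) layer for its frozen 8-bit counterpart. Below is a toy sketch of that traversal, reusing the hypothetical `Frozen8BitLinearWithLoRA` class from the earlier snippet; the real notebook uses the bitsandbytes-backed classes from `utils.py` and also handles embeddings.

```py
import torch.nn as nn

def convert_linears_to_8bit(module: nn.Module) -> None:
    """Recursively replace every nn.Linear with a frozen 8-bit + adapter version."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, Frozen8BitLinearWithLoRA(child))
        else:
            convert_linears_to_8bit(child)
```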

### How to use

```sh
wget https://huggingface.co/mrm8488/bertin-gpt-j-6B-ES-8bit/resolve/main/utils.py -O Utils.py
pip install transformers
pip install bitsandbytes-cuda111==0.26.0
```

```py
import transformers
import torch

# patched GPT-J classes that de-quantize the frozen 8-bit weights on the fly
from Utils import GPTJBlock, GPTJForCausalLM

device = 'cuda' if torch.cuda.is_available() else 'cpu'

transformers.models.gptj.modeling_gptj.GPTJBlock = GPTJBlock  # monkey-patch GPT-J

tokenizer = transformers.AutoTokenizer.from_pretrained("mrm8488/bertin-gpt-j-6B-ES-8bit")
# load the 8-bit BERTIN checkpoint itself
model = GPTJForCausalLM.from_pretrained("mrm8488/bertin-gpt-j-6B-ES-8bit", pad_token_id=tokenizer.eos_token_id, low_cpu_mem_usage=True).to(device)

prompt = tokenizer("El sentido de la vida es", return_tensors='pt')
prompt = {key: value.to(device) for key, value in prompt.items()}

out = model.generate(**prompt, max_length=64, do_sample=True)

print(tokenizer.decode(out[0]))
```