arnavgrg committed
Commit 47352e2
1 Parent(s): 67c5365

Create README.md

Files changed (1): README.md (+23 −0)

README.md ADDED
---
license: apache-2.0
tags:
- text-generation-inference
---

This is an upscaled fp16 variant of Meta's original CodeLlama-70b-instruct base model, created after loading it with nf4 4-bit quantization via bitsandbytes.
The main idea here is to upscale the Linear4bit layers back to fp16 so that the quantization/dequantization cost doesn't have to be paid on each forward pass at inference time.
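For reference, the upscaling step looks roughly like the sketch below. This is a minimal illustration of the idea, not the exact script used to produce this checkpoint; it assumes a recent bitsandbytes (>= 0.41), assumes `codellama/CodeLlama-70b-Instruct-hf` as the base repository, and the helper name is hypothetical:

```python
# Minimal sketch of the nf4 -> fp16 upscaling idea (illustrative, not the
# exact script used to produce this checkpoint).
import torch
import torch.nn as nn
import bitsandbytes as bnb
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load the base model with nf4 4-bit quantization via bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-70b-Instruct-hf",  # assumed base repo
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)

def upscale_linear4bit_to_fp16(model):
    """Hypothetical helper: dequantize each Linear4bit weight to fp16 once,
    ahead of time, and swap in a plain nn.Linear so no dequantization cost
    is paid at inference time."""
    for parent in list(model.modules()):
        for name, child in list(parent.named_children()):
            if isinstance(child, bnb.nn.Linear4bit):
                # Dequantize the packed nf4 weight back to an fp16 tensor.
                fp16_weight = bnb.functional.dequantize_4bit(
                    child.weight.data, child.weight.quant_state
                ).to(torch.float16)
                new_linear = nn.Linear(
                    child.in_features,
                    child.out_features,
                    bias=child.bias is not None,
                    device=fp16_weight.device,
                    dtype=torch.float16,
                )
                new_linear.weight = nn.Parameter(fp16_weight, requires_grad=False)
                if child.bias is not None:
                    new_linear.bias = child.bias
                setattr(parent, name, new_linear)

upscale_linear4bit_to_fp16(model)
model.save_pretrained("codellama-70b-instruct-nf4-fp16-upscaled")
```

Doing the dequantization once ahead of time trades the memory savings of 4-bit storage for fp16-level inference latency.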
_Note: Quantization to nf4 is not lossless, so the linear-layer weights are lossy; this model will not perform quite as well as the official base model._
To use this model, you can just load it via `transformers` in fp16:
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled",
    device_map="auto",
    torch_dtype=torch.float16,
)
```
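After loading, generation works like any other causal LM in `transformers`. A quick, hypothetical example (the prompt and generation settings are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "arnavgrg/codellama-70b-instruct-nf4-fp16-upscaled"
)

# Illustrative prompt; generation settings are arbitrary.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is an instruct-tuned model, prompts formatted with CodeLlama-70b's instruction format (for example via `tokenizer.apply_chat_template`, if the tokenizer ships a chat template) may give better results.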