We present QLORA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single 48GB GPU while preserving full 16-bit finetuning task performance. QLORA backpropagates gradients through a frozen 4-bit quantized pretrained language model into Low Rank Adapters (LoRA). Our best model family, which we name Guanaco, outperforms all previous openly released models on the Vicuna benchmark, reaching 99.3% of the performance level of ChatGPT while only requiring 24 hours of finetuning on a single GPU. QLORA introduces a number of innovations to save memory without sacrificing performance: (a) 4-bit NormalFloat (NF4), a new data type that is information theoretically optimal for normally distributed weights, (b) Double Quantization to reduce the average memory footprint by quantizing the quantization constants, and (c) Paged Optimizers to manage memory spikes. We use QLORA to finetune more than 1,000 models, providing a detailed analysis of instruction following and chatbot performance across 8 instruction datasets, multiple model types (LLaMA, T5), and model scales that would be infeasible to run with regular finetuning (e.g. 33B and 65B parameter models). Our results show that QLoRA finetuning on a small high-quality dataset leads to state-of-the-art results, even when using smaller models than the previous SoTA. We provide a detailed analysis of chatbot performance based on both human and GPT-4 evaluations showing that GPT-4 evaluations are a cheap and reasonable alternative to human evaluation. Furthermore, we find that current chatbot benchmarks are not trustworthy to accurately evaluate the performance levels of chatbots. A lemon-picked analysis demonstrates where Guanaco fails compared to ChatGPT. We release all of our models and code, including CUDA kernels for 4-bit training.

*Equal contribution.

Introduction

Finetuning large language models (LLMs) is a highly effective way to improve their performance [40, 62, 43, 61, 59, 37] and to add desirable or remove undesirable behaviors [43, 2, 4]. However, finetuning very large models is prohibitively expensive; regular 16-bit finetuning of a LLaMA 65B parameter model [57] requires more than 780 GB of GPU memory. While recent quantization methods can reduce the memory footprint of LLMs [14, 13, 18, 66], such techniques only work for inference and break down during training [65].

We demonstrate for the first time that it is possible to finetune a quantized 4-bit model without any performance degradation.
Our method, QLORA, uses a novel high-precision technique to quantize a pretrained model to 4-bit, then adds a small set of learnable Low-rank Adapter weights [28] that are tuned by backpropagating gradients through the quantized weights.

Table 1: Elo ratings for a competition between models, averaged for 10,000 random initial orderings. The winner of a match is determined by GPT-4, which declares which response is better for a given prompt of the Vicuna benchmark. 95% confidence intervals are shown (±). After GPT-4, Guanaco 33B and 65B win the most matches, while Guanaco 13B scores better than Bard.

QLORA reduces the average memory requirements of finetuning a 65B parameter model from >780 GB of GPU memory to <48 GB without degrading the runtime or predictive performance compared to a 16-bit fully finetuned baseline. This marks a significant shift in accessibility of LLM finetuning: the largest publicly available models to date are now finetunable on a single GPU.
Using QLORA, we train the Guanaco family of models, with the second best model reaching 97.8% of the performance level of ChatGPT on the Vicuna [10] benchmark, while being trainable in less than 12 hours on a single consumer GPU; using a single professional GPU over 24 hours we achieve 99.3% with our largest model, essentially closing the gap to ChatGPT on the Vicuna benchmark. When deployed, our smallest Guanaco model (7B parameters) requires just 5 GB of memory and outperforms a 26 GB Alpaca model by more than 20 percentage points on the Vicuna benchmark (Table 6).

QLORA introduces multiple innovations designed to reduce memory use without sacrificing performance: (1) 4-bit NormalFloat, an information theoretically optimal quantization data type for normally distributed data that yields better empirical results than 4-bit Integers and 4-bit Floats. (2) Double Quantization, a method that quantizes the quantization constants, saving an average of about 0.37 bits per parameter (approximately 3 GB for a 65B model). (3) Paged Optimizers, using NVIDIA unified memory to avoid the gradient checkpointing memory spikes that occur when processing a mini-batch with a long sequence length. We combine these contributions into a better tuned LoRA approach that includes adapters at every network layer and thereby avoids almost all of the accuracy tradeoffs seen in prior work.

QLORA's efficiency enables us to perform an in-depth study of instruction finetuning and chatbot performance on model scales that would be impossible using regular finetuning due to memory overhead. Therefore, we train more than 1,000 models across several instruction tuning datasets, model architectures, and sizes between 80M and 65B parameters. In addition to showing that QLORA recovers 16-bit performance (§4) and training a state-of-the-art chatbot, Guanaco (§5), we also analyze trends in the trained models. First, we find that data quality is far more important than dataset size, e.g., a 9k sample dataset (OASST1) outperformed a 450k sample dataset (FLAN v2, subsampled) on chatbot performance, even when both are meant to support instruction following generalization. Second, we show that strong Massive Multitask Language Understanding (MMLU) benchmark performance does not imply strong Vicuna chatbot benchmark performance and vice versa; in other words, dataset suitability matters more than size for a given task.

Furthermore, we also provide an extensive analysis of chatbot performance that uses both human raters and GPT-4 for evaluation. We use tournament-style benchmarking where models compete against each other in matches to produce the best response for a given prompt. The winner of a match is judged by either GPT-4 or human annotators. The tournament results are aggregated into Elo scores [16, 17] which determine the ranking of chatbot performance. We find that GPT-4 and human evaluations largely agree on the rank of model performance in the tournaments, but we also find there are instances of strong disagreement. As such, we highlight that model-based evaluation, while providing a cheap alternative to human annotation, also has its uncertainties.

We augment our chatbot benchmark results with a qualitative analysis of Guanaco models. Our analysis highlights success and failure cases that were not captured by the quantitative benchmarks.

We release all model generations with human and GPT-4 annotations to facilitate further study. We open-source our codebase and CUDA kernels and integrate our methods into the Hugging Face transformers stack [64], making them easily accessible to all. We release a collection of adapters for 7/13/33/65B size models, trained on 8 different instruction following datasets, for a total of 32 different open-sourced, finetuned models.

Background

Block-wise k-bit Quantization. Quantization is the process of discretizing an input from a representation that holds more information to a representation with less information. It often means taking a data type with more bits and converting it to fewer bits, for example from 32-bit floats to 8-bit integers. To ensure that the entire range of the low-bit data type is used, the input data type is commonly rescaled into the target data type range through normalization by the absolute maximum of the input elements, which are usually structured as a tensor. For example, quantizing a 32-bit Floating Point (FP32) tensor into an Int8 tensor with range [-127, 127]:

X^Int8 = round(127 / absmax(X^FP32) · X^FP32) = round(c^FP32 · X^FP32),   (1)

where c is the quantization constant or quantization scale. Dequantization is the inverse:

dequant(c^FP32, X^Int8) = X^Int8 / c^FP32 = X^FP32.   (2)

The problem with this approach is that if a large magnitude value (i.e., an outlier) occurs in the input tensor, then the quantization bins (certain bit combinations) are not utilized well, with few or no numbers quantized in some bins. To prevent the outlier issue, a common approach is to chunk the input tensor into blocks that are independently quantized, each with their own quantization constant c. This can be formalized as follows: we chunk the input tensor X ∈ R^(b×h) into n contiguous blocks of size B by flattening the input tensor and slicing the linear segment into n = (b × h)/B blocks. We quantize these blocks independently with Equation 1 to create a quantized tensor and n quantization constants c_i.
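As a concrete illustration of Equations 1 and 2, here is a minimal PyTorch sketch of block-wise absmax Int8 quantization; the helper names are ours, not the paper's released kernels, and it assumes the number of elements is divisible by the blocksize:

```python
import torch

def blockwise_quantize_int8(x: torch.Tensor, block_size: int = 64):
    """Quantize a tensor block-wise with absmax scaling (Equation 1)."""
    flat = x.reshape(-1, block_size)                          # n = numel / block_size blocks
    c = 127.0 / flat.abs().max(dim=1, keepdim=True).values    # one constant per block
    q = torch.round(flat * c).to(torch.int8)
    return q, c

def blockwise_dequantize(q: torch.Tensor, c: torch.Tensor) -> torch.Tensor:
    """Invert the quantization (Equation 2)."""
    return q.to(torch.float32) / c

x = torch.randn(4, 128)
q, c = blockwise_quantize_int8(x)
x_hat = blockwise_dequantize(q, c).reshape(4, 128)
print((x - x_hat).abs().max())    # small block-wise quantization error
```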
Low-rank Adapters. Low-rank Adapter (LoRA) finetuning [28] is a method that reduces memory requirements by using a small set of trainable parameters, often termed adapters, while not updating the full model parameters, which remain fixed. Gradients during stochastic gradient descent are passed through the fixed pretrained model weights to the adapter, which is updated to optimize the loss function. LoRA augments a linear projection through an additional factorized projection. Given a projection XW = Y with X ∈ R^(b×h) and W ∈ R^(h×o), LoRA computes:

Y = XW + sXL_1L_2,   (3)

where L_1 ∈ R^(h×r) and L_2 ∈ R^(r×o), and s is a scalar.
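A minimal PyTorch sketch of this factorized update (illustrative names; W is frozen and only L1 and L2 are trainable):

```python
import torch
import torch.nn as nn

class LoRALinearSketch(nn.Module):
    """Y = X W + s X L1 L2 with a frozen base weight and trainable low-rank factors."""
    def __init__(self, h: int, o: int, r: int = 8, s: float = 1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(h, o), requires_grad=False)  # frozen W
        self.L1 = nn.Parameter(torch.randn(h, r) * 0.01)   # trainable adapter factor
        self.L2 = nn.Parameter(torch.zeros(r, o))          # zero init: adapter starts as a no-op
        self.s = s

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.weight + self.s * (x @ self.L1) @ self.L2
```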
Memory Requirement of Parameter-Efficient Finetuning. One important point of discussion is the memory requirement of LoRA during training, both in terms of the number and size of adapters used. Since the memory footprint of LoRA is so minimal, we can use more adapters to improve performance without significantly increasing the total memory used. While LoRA was designed as a Parameter Efficient Finetuning (PEFT) method, most of the memory footprint for LLM finetuning comes from activation gradients and not from the learned LoRA parameters. For a 7B LLaMA model trained on FLAN v2 with a batch size of 1, with LoRA weights equivalent to the commonly used 0.2% of the original model weights [28, 37], the LoRA input gradients have a memory footprint of 567 MB while the LoRA parameters take up only 26 MB. With gradient checkpointing [9], the input gradients reduce to an average of 18 MB per sequence, making them more memory intensive than all LoRA weights combined. In comparison, the 4-bit base model consumes 5,048 MB of memory. This highlights that gradient checkpointing is important, but also that aggressively reducing the number of LoRA parameters yields only minor memory benefits. This means we can use more adapters without significantly increasing the overall training memory footprint (see Appendix G for a detailed breakdown). As discussed later, this is crucial for recovering full 16-bit precision performance.
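A back-of-the-envelope check of the adapter figure quoted above, assuming 16-bit adapter parameters (illustrative arithmetic only):

```python
base_params = 7e9          # 7B LLaMA base model
lora_fraction = 0.002      # LoRA weights ~0.2% of the base weights
bytes_per_param = 2        # 16-bit (BFloat16) adapter parameters

lora_bytes = base_params * lora_fraction * bytes_per_param
print(f"LoRA parameters: {lora_bytes / 2**20:.0f} MiB")  # ~27 MiB, in line with the ~26 MB above
```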
QLORA Finetuning

QLORA achieves high-fidelity 4-bit finetuning via two techniques we propose: 4-bit NormalFloat (NF4) quantization and Double Quantization. Additionally, we introduce Paged Optimizers to prevent memory spikes during gradient checkpointing from causing out-of-memory errors that have traditionally made finetuning on a single machine difficult for large models.

QLORA has one low-precision storage data type, in our case usually 4-bit, and one computation data type that is usually BFloat16. In practice, this means whenever a QLORA weight tensor is used, we dequantize the tensor to BFloat16, and then perform a matrix multiplication in 16-bit.

We now discuss the components of QLORA followed by a formal definition of QLORA.

4-bit NormalFloat Quantization

The NormalFloat (NF) data type builds on Quantile Quantization [15], which is an information-theoretically optimal data type that ensures each quantization bin has an equal number of values assigned from the input tensor. Quantile quantization works by estimating the quantile of the input tensor through the empirical cumulative distribution function.

The main limitation of quantile quantization is that the process of quantile estimation is expensive. Therefore fast quantile approximation algorithms, such as SRAM quantiles [15], are used to estimate them. Due to the approximate nature of these quantile estimation algorithms, the data type has large quantization errors for outliers, which are often the most important values.

Expensive quantile estimates and approximation errors can be avoided when input tensors come from a distribution fixed up to a quantization constant. In such cases, input tensors have the same quantiles, making exact quantile estimation computationally feasible. Since pretrained neural network weights usually have a zero-centered normal distribution with standard deviation σ (see Appendix F), we can transform all weights to a single fixed distribution by scaling σ such that the distribution fits exactly into the range of our data type. For our data type, we set the arbitrary range [-1, 1]. As such, both the quantiles for the data type and the neural network weights need to be normalized into this range.

The information theoretically optimal data type for zero-mean normal distributions with arbitrary standard deviations σ in the range [-1, 1] is computed as follows: (1) estimate the 2^k + 1 quantiles of a theoretical N(0, 1) distribution to obtain a k-bit quantile quantization data type for normal distributions, (2) take this data type and normalize its values into the [-1, 1] range, (3) quantize an input weight tensor by normalizing it into the [-1, 1] range through absolute maximum rescaling.

Once the weight range and data type range match, we can quantize as usual. Step (3) is equivalent to rescaling the standard deviation of the weight tensor to match the standard deviation of the k-bit data type. More formally, we estimate the 2^k values q_i of the data type as follows:

q_i = 1/2 (Q_X(i / (2^k + 1)) + Q_X((i + 1) / (2^k + 1))),   (4)

where Q_X(·) is the quantile function of the standard normal distribution N(0, 1). A problem for a symmetric k-bit quantization is that this approach does not have an exact representation of zero, which is an important property to quantize padding and other zero-valued elements with no error. To ensure a discrete zeropoint of 0 and to use all 2^k bits for a k-bit datatype, we create an asymmetric data type by estimating the quantiles q_i of two ranges: 2^(k-1) quantiles for the negative part and 2^(k-1) + 1 quantiles for the positive part. We then unify these sets of q_i and remove one of the two zeros that occurs in both sets. We term the resulting data type, which has an equal expected number of values in each quantization bin, k-bit NormalFloat (NFk), since the data type is information-theoretically optimal for zero-centered normally distributed data. The exact values of this data type can be found in Appendix E.
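A simplified sketch of this construction follows; the end-point probabilities chosen here are our assumption, and the released NF4 implementation may choose them differently, so the exact values need not match Appendix E:

```python
import numpy as np
from scipy.stats import norm

def normal_float_values(k: int = 4) -> np.ndarray:
    """Build 2^k NFk values: quantiles of N(0, 1) from asymmetric halves,
    duplicate zero removed, normalized into [-1, 1]."""
    n_neg, n_pos = 2 ** (k - 1), 2 ** (k - 1) + 1   # negative and positive halves
    eps = 1e-2                                      # keep probabilities strictly inside (0, 1)
    probs = np.concatenate([np.linspace(eps, 0.5, n_neg),
                            np.linspace(0.5, 1 - eps, n_pos)])
    q = np.unique(norm.ppf(probs))                  # quantile function of N(0, 1); drops the duplicate zero
    return q / np.abs(q).max()                      # normalize into [-1, 1]

print(normal_float_values(4))                       # 16 values, exact zero included
```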
Double Quantization

We introduce Double Quantization (DQ), the process of quantizing the quantization constants for additional memory savings. While a small blocksize is required for precise 4-bit quantization [13], it also has a considerable memory overhead. For example, using 32-bit constants and a blocksize of 64 for W, quantization constants add 32/64 = 0.5 bits per parameter on average. Double Quantization helps reduce the memory footprint of quantization constants.

More specifically, Double Quantization treats the quantization constants c_2^FP32 of the first quantization as inputs to a second quantization. This second step yields the quantized quantization constants c_2^FP8 and the second level of quantization constants c_1^FP32. We use 8-bit Floats with a blocksize of 256 for the second quantization as no performance degradation is observed for 8-bit quantization, in line with results from Dettmers and Zettlemoyer [13]. Since the c_2^FP32 are positive, we subtract the mean from c_2 before quantization to center the values around zero and make use of symmetric quantization. On average, for a blocksize of 64, this quantization reduces the memory footprint per parameter from 32/64 = 0.5 bits to 8/64 + 32/(64 · 256) = 0.127 bits, a reduction of 0.373 bits per parameter.
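A minimal sketch of the idea, substituting int8 absmax quantization of the mean-centered constants for the FP8 format used in the paper (the function name and the int8 substitution are ours):

```python
import torch

def double_quantize_constants(c2_fp32: torch.Tensor, block_size: int = 256):
    """Quantize the first-level quantization constants themselves (Double Quantization)."""
    mean = c2_fp32.mean()
    centered = (c2_fp32 - mean).reshape(-1, block_size)           # assumes numel divisible by 256
    c1 = 127.0 / centered.abs().max(dim=1, keepdim=True).values   # second-level constants
    c2_q = torch.round(centered * c1).to(torch.int8)              # 8-bit quantized constants
    return c2_q, c1, mean

# Average storage overhead per model weight (blocksize 64 for W, 256 for c2):
print(32 / 64)                    # 0.5   bits/parameter without DQ
print(8 / 64 + 32 / (64 * 256))   # ~0.127 bits/parameter with DQ
```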
Paged Optimizers use the NVIDIA unified memory feature, which does automatic page-to-page transfers between the CPU and GPU for error-free GPU processing in the scenario where the GPU occasionally runs out of memory. The feature works like regular memory paging between CPU RAM and the disk. We use this feature to allocate paged memory for the optimizer states, which are then automatically evicted to CPU RAM when the GPU runs out of memory and paged back into GPU memory when the memory is needed in the optimizer update step.
QLORA. Using the components described above, we define QLORA for a single linear layer in the quantized base model with a single LoRA adapter as follows:

Y^BF16 = X^BF16 doubleDequant(c_1^FP32, c_2^k-bit, W^NF4) + X^BF16 L_1^BF16 L_2^BF16,   (5)

where doubleDequant(·) is defined as:

doubleDequant(c_1^FP32, c_2^k-bit, W^k-bit) = dequant(dequant(c_1^FP32, c_2^k-bit), W^4bit) = W^BF16.   (6)

We use NF4 for W and FP8 for c_2. We use a blocksize of 64 for W for higher quantization precision and a blocksize of 256 for c_2 to conserve memory.

For parameter updates, only the gradient with respect to the error for the adapter weights ∂E/∂L_i is needed, and not for the 4-bit weights ∂E/∂W. However, the calculation of ∂E/∂L_i entails the calculation of ∂X/∂W, which proceeds via Equation (5) with dequantization from the storage data type W^NF4 to the computation data type W^BF16 to calculate the derivative ∂X/∂W in BFloat16 precision. To summarize, QLORA has one storage data type (usually 4-bit NormalFloat) and a computation data type (16-bit BrainFloat). We dequantize the storage data type to the computation data type to perform the forward and backward pass, but we only compute weight gradients for the LoRA parameters, which use 16-bit BrainFloat.
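Putting the pieces together, here is a simplified sketch of Equation (5): a frozen base weight stored as 4-bit codebook indices plus absmax constants, dequantized to the BFloat16 computation data type on the fly, with only the LoRA factors receiving gradients. Per-column constants stand in for the paper's per-block constants and double quantization; all names and shapes are illustrative.

```python
import torch
import torch.nn as nn

class QLoRALinearSketch(nn.Module):
    """Frozen 4-bit base weight (codebook indices + per-column absmax constants,
    a simplification of the paper's per-block constants with double quantization)
    plus a trainable LoRA adapter computed in BFloat16."""

    def __init__(self, codes, codebook, const, r=16, s=0.5):
        super().__init__()
        self.register_buffer("codes", codes)        # (in, out) integer indices, frozen
        self.register_buffer("codebook", codebook)  # the 2^4 NF4 values in [-1, 1]
        self.register_buffer("const", const)        # (out,) absmax rescaling constants
        in_features, out_features = codes.shape
        self.L1 = nn.Parameter(torch.randn(in_features, r, dtype=torch.bfloat16) * 0.01)
        self.L2 = nn.Parameter(torch.zeros(r, out_features, dtype=torch.bfloat16))
        self.s = s

    def forward(self, x):                           # x: (batch, in) in BFloat16
        # Dequantize from the storage data type (4-bit codes) to the computation data type.
        w = (self.codebook[self.codes.long()] * self.const).to(torch.bfloat16)
        # Equation (5): Y = X W + s X L1 L2; only L1 and L2 receive weight gradients.
        return x @ w + self.s * (x @ self.L1) @ self.L2
```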
QLoRA vs. Standard Finetuning

We have discussed how QLoRA works and how it can significantly reduce the required memory for finetuning models. The main question now is whether QLoRA can perform as well as full-model finetuning. Furthermore, we want to analyze the components of QLoRA, including the impact of NormalFloat4 over standard Float4. The following sections discuss the experiments aimed at answering these questions.

Experimental setup. We consider three architectures (encoder, encoder-decoder, and decoder only) and compare QLoRA with 16-bit adapter-finetuning and with full-finetuning for models up to 3B. Our evaluations include GLUE [58] with RoBERTa-large [38], Super-NaturalInstructions (TKInstruct) [61] with T5 [49], and 5-shot MMLU [24] after finetuning LLaMA on FLAN v2 [39] and Alpaca [55]. To additionally study the advantages of NF4 over other 4-bit data types, we use the setup of Dettmers and Zettlemoyer [13] and measure post-quantization zero-shot accuracy and perplexity across different models (OPT [72], LLaMA [57], BLOOM [52], Pythia [7]) for model sizes 125M to 13B. We provide more details in the results section for each particular setup to make the results more readable. Full details are in Appendix A.

Using LoRA on all transformer layers is critical to match 16-bit performance.

While paged optimizers are critical to do 33B/65B QLORA tuning on a single 24/48GB GPU, we do not provide hard measurements for Paged Optimizers since the paging only occurs when processing mini-batches with long sequence lengths, which is rare. We do, however, perform an analysis of the runtime of paged optimizers for 65B models on 48GB GPUs and find that with a batch size of 16, paged optimizers provide the same training speed as regular optimizers. Future work should measure and characterize under what circumstances slowdowns occur from the paging process.

Default LoRA hyperparameters do not match 16-bit performance. When using the standard practice of applying LoRA to query and value attention projection matrices [28], we are not able to replicate full finetuning performance for large base models. As shown in Figure 2 for LLaMA 7B finetuning on Alpaca, we find that the most critical LoRA hyperparameter is how many LoRA adapters are used in total, and that LoRA on all linear transformer block layers is required to match full finetuning performance. Other LoRA hyperparameters, such as the projection dimension r, do not affect performance (see Appendix A). Similarly, we find that default hyperparameters for fully finetuned baselines are undertuned. We do a hyperparameter search over learning rates 1e-6 to 5e-5 and batch sizes 8 to 128 to find robust baselines. Results for 7B LLaMA finetuning on Alpaca are shown in Figure 2.
4-bit NormalFloat yields better performance than 4-bit Floating Point. While the 4-bit NormalFloat (NF4) data type is information-theoretically optimal, it still needs to be determined if this property translates to empirical advantages. We follow the setup from Dettmers and Zettlemoyer [13], where quantized LLMs (OPT [72], BLOOM [52], Pythia [7], LLaMA) of different sizes (125M to 65B) with different data types are evaluated on language modeling and a set of zero-shot tasks. In Figure 3 we see that NF4 yields better performance than FP4 and that double quantization does not degrade performance.

4-bit quantization for inference is possible, but leads to performance degradation relative to 16-bit [13, 18]. This raises the crucial question of whether the lost performance can be recovered by conducting 4-bit adapter finetuning. We test this for two setups. The first focuses on a comparison with full 16-bit finetuning of RoBERTa and T5 models sized 125M to 3B parameters on GLUE and the Super-NaturalInstructions dataset. Results are shown in Table 3. In both datasets, we observe that 16-bit, 8-bit, and 4-bit adapter methods replicate the performance of the fully finetuned 16-bit baseline. This suggests that the performance lost due to the imprecise quantization can be fully recovered through adapter finetuning after quantization.

For our second setup, since full finetuning models at and beyond 11B parameters requires more than one server of high-memory GPUs, we continue to test whether 4-bit QLORA can match 16-bit LoRA at the 7B to 65B parameter scales. To this end, we finetune LLaMA 7B through 65B on two instruction following datasets, Alpaca and FLAN v2, and evaluate on the MMLU benchmark via 5-shot accuracy. Results are shown in Table 4, where we see that NF4 with double quantization fully recovers the 16-bit LoRA MMLU performance. In addition, we also note that QLORA with FP4 lags behind the 16-bit brain float LoRA baseline by about 1 percentage point. This corroborates both of our findings that (1) QLORA with NF4 replicates both 16-bit full finetuning and 16-bit LoRA finetuning performance, and (2) NF4 is superior to FP4 in terms of quantization precision.

Summary. Our results consistently show that 4-bit QLORA with the NF4 data type matches 16-bit full finetuning and 16-bit LoRA finetuning performance on academic benchmarks with well-established evaluation setups. We have also shown that NF4 is more effective than FP4 and that double quantization does not degrade performance. Combined, this forms compelling evidence that 4-bit QLORA tuning reliably yields results matching 16-bit methods.

In line with previous work on quantization [13], our MMLU and Elo results indicate that with a given finetuning and inference resource budget it is beneficial to increase the number of parameters in the base model while decreasing their precision. This highlights the importance of the efficiency benefits from QLORA. Since we did not observe performance degradation compared to full finetuning in our experiments with 4-bit finetuning, this raises the question of where the performance-precision trade-off exactly lies for QLoRA tuning, which we leave to future work to explore.

We proceed to investigate instruction tuning at scales that would be impossible to explore with full 16-bit finetuning on academic research hardware.
Pushing the Chatbot State-of-the-art with QLoRA

Having established that 4-bit QLORA matches 16-bit performance across scales, tasks, and datasets, we conduct an in-depth study of instruction finetuning up to the largest open-source language models available for research. To assess the performance of instruction finetuning these models, we evaluate on a challenging Natural Language Understanding benchmark (MMLU) and develop new methods for real-world chatbot performance evaluation.

Experimental setup

We now describe an overview of the experimental setup, with full details in Appendix B.

Data. Since, to our knowledge, there is no comprehensive study of recent instruction-following datasets, we select eight recent datasets. We include datasets obtained through crowd-sourcing (OASST1 [31], HH-RLHF [4]), distillation from instruction-tuned models (Alpaca [55], self-instruct [59], unnatural-instructions [26]), corpora aggregations (FLAN v2 [12]), as well as hybrids (Chip2 [32], Longform [30]). These datasets cover different languages, data sizes, and licenses.
Training Setup. To avoid confounding effects from different training objectives, we perform QLoRA finetuning with cross-entropy loss (supervised learning) without reinforcement learning, even for datasets that include human judgments of different responses. For datasets that have a clear distinction between instruction and response, we finetune only on the response (see ablations in Appendix B). For OASST1 and HH-RLHF, multiple responses are available. We then select the top response at every level of the conversation tree and finetune on the full selected conversation, including the instructions. In all of our experiments, we use NF4 QLORA with double quantization and paged optimizers to prevent memory spikes during gradient checkpointing. We do small hyperparameter searches for the 13B and 33B LLaMA models, and we find that all hyperparameter settings found at 7B generalize (including number of epochs), except learning rate and batch size. We halve the learning rate for 33B and 65B while doubling the batch size.

Baselines. We compare our models to both research (Vicuna [10] and Open Assistant [31]) and commercial (GPT-4 [42], GPT-3.5-turbo, and Bard) chatbot systems. The Open Assistant model is a LLaMA 33B model finetuned with Reinforcement Learning from Human Feedback (RLHF) on the same OASST1 dataset that we experiment with. Vicuna does full fine-tuning of LLaMA 13B on proprietary user-shared conversations from ShareGPT and is thus the result of distillation from OpenAI GPT models. Following common practice, we use the MMLU (Massive Multitask Language Understanding) benchmark [24] to measure performance on a range of language understanding tasks. This is a multiple-choice benchmark covering 57 tasks including elementary mathematics, US history, computer science, law, and more. We report 5-shot test accuracy.

Evaluation

We also test generative language capabilities through both automated and human evaluations. This second set of evaluations relies on queries curated by humans and aims at measuring the quality of model responses. While this is a more realistic testbed for chatbot model performance and is growing in popularity, there is no commonly accepted protocol in the literature. We describe below our proposed setup, using nucleus sampling with p = 0.9 and temperature 0.7 in all cases.

Benchmark Data. We evaluate on two curated datasets of queries (questions): the Vicuna prompts [10] and the OASST1 validation dataset [31]. We use the Vicuna prompts, a set of 80 prompts from a diverse set of categories, without modifications. The OASST1 dataset is a multilingual collection of crowd-sourced multi-turn dialogs between a user and an assistant. We select all user messages in the validation dataset as queries and include previous turns in the prompt. This procedure leads to 953 unique user queries. We term these two datasets the Vicuna and OA benchmarks.

Automated Evaluation. First, based on the evaluation protocol introduced by Chiang et al. [10], we use GPT-4 to rate the performance of different systems against ChatGPT (GPT-3.5 Turbo) on the Vicuna benchmark. Given a query along with ChatGPT's and a model's responses, GPT-4 is prompted to assign a score out of ten to both responses and provide an explanation. The overall performance of a model is calculated as a percentage of the score that ChatGPT achieved. Note this relative score can be higher than 100% if the model achieves a higher absolute score than ChatGPT. We find a significant ordering effect, with GPT-4 increasing the score of the response occurring earlier in the prompt. To control for such effects, we recommend reporting the mean score over both orders.
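A small sketch of this order-averaged relative score (hypothetical helper; it assumes the GPT-4 scores have already been collected):

```python
def relative_score(model_first, chatgpt_first):
    """Mean relative score vs. ChatGPT over both response orders.
    Each argument is a list of (model_score, chatgpt_score) pairs out of ten."""
    def pct(pairs):
        model_total = sum(m for m, _ in pairs)
        chatgpt_total = sum(c for _, c in pairs)
        return 100.0 * model_total / chatgpt_total
    return 0.5 * (pct(model_first) + pct(chatgpt_first))

# Example: the result can exceed 100% if the model scores higher than ChatGPT on average.
print(relative_score([(9, 8), (7, 9)], [(8, 8), (9, 7)]))
```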
Next, we measure performance through direct comparisons between system outputs. We simplify the rating scheme to a three-class labeling problem that accounts for ties. We prompt GPT-4 to pick the best response or declare a tie, and to provide an explanation. We conduct these head-to-head comparisons on all permutations of pairs of systems on both the Vicuna and OA benchmarks.

Human Evaluation. While recent work indicates generative models can be effectively employed for system evaluations [19], the reliability of GPT-4 ratings for assessing chatbot performance is, to our knowledge, yet to be proven to correlate with human judgments. Therefore, we run two parallel human evaluations on the Vicuna benchmark matching both automated evaluation protocols described above. We use Amazon Mechanical Turk (AMT) and get two human annotators for comparisons to ChatGPT and three annotators for pairwise comparisons.

Elo Rating. With both human and automated pairwise comparisons, we create a tournament-style competition where models compete against each other. The tournament is made up of matches where pairs of models compete to produce the best response for a given prompt. This is similar to how Bai et al. [4] and Chiang et al. [10] compare models, but we also employ GPT-4 ratings in addition to human ratings. We randomly sample from the set of labeled comparisons to compute Elo [16, 17]. The Elo rating, which is widely used in chess and other games, is a measure of the expected win rate relative to an opponent's win rate; for example, an Elo of 1100 vs 1000 means the Elo 1100 player has an expected win rate of approximately 65% against the Elo 1000 opponent, while a 1000 vs 1000 or 1100 vs 1100 match results in an expected win rate of 50%. The Elo rating changes after each match proportionally to the expected outcome, that is, an unexpected upset leads to a large change in Elo rating while an expected outcome leads to a small change. Over time, Elo ratings approximately match the skill of each player at playing the game. We start with a score of 1,000 and use K = 32. Similar to Chiang et al. [10], we repeat this procedure 10,000 times with different random seeds to control for ordering effects, e.g., the effect of which model pairs compete with each other first.
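A compact sketch of this Elo procedure, using the stated starting score of 1,000 and K = 32 (helper names are ours):

```python
import random

def update_elo(rating_a, rating_b, score_a, k=32):
    """One Elo update: score_a is 1.0 if A wins, 0.0 if B wins, 0.5 for a tie."""
    expected_a = 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
    rating_a += k * (score_a - expected_a)
    rating_b += k * ((1.0 - score_a) - (1.0 - expected_a))
    return rating_a, rating_b

def elo_from_matches(models, matches, k=32, n_repeats=10_000, seed=0):
    """Average Elo over many random orderings of the labeled comparisons,
    mirroring the procedure described above. `matches` holds
    (model_a, model_b, score_a) judgments from GPT-4 or human annotators."""
    rng = random.Random(seed)
    totals = {m: 0.0 for m in models}
    for _ in range(n_repeats):
        ratings = {m: 1000.0 for m in models}   # everyone starts at 1,000
        order = list(matches)
        rng.shuffle(order)                      # control for ordering effects
        for a, b, score_a in order:
            ratings[a], ratings[b] = update_elo(ratings[a], ratings[b], score_a, k)
        for m in models:
            totals[m] += ratings[m]
    return {m: totals[m] / n_repeats for m in models}
```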
\n17]\n[10]\nGuanaco: QLORA trained on OASST1 is a State-of-the-art Chatbot\nBased on our automated and human evaluations
we find that the top QLORA tuned model
Guanaco 65B
which we finetune on a variant of OASST1
is the best-performing open-source chatbot model and offers performance competitive to ChatGPT. When compared to GPT-4
Guanaco 65B and 33B have an expected win probability of 30%
based on Elo rating from human annotators system-level pairwise comparisons -the highest reported to date.\nThe Vicuna benchmark [10] results relative to ChatGPT are shown in Table 6. We find that Guanaco 65B is the best-performing model after GPT-4
achieving 99.3% performance relative to ChatGPT. Guanaco 33B has more parameters than the Vicuna 13B model
but uses only 4-bit precision for its weights and is thus much more memory efficient at 21 GB vs 26 GB
providing a three percentage points of improvement over Vicuna 13B. Furthermore
Guanaco 7B easily fits on modern phones at a 5 GB footprint while still scoring nearly 20 percentage points higher than Alpaca 13B.\n[10]\n6\nHowever
Table 6 also has very wide confidence intervals
with many models overlapping in performance. We hypothesize that this uncertainty comes from the lack of clear specification of scale
e.g.
it is unclear what 8 on a 10 point scale means across different scenarios. As such
we instead recommend using the Elo ranking method [16]
based on pairwise judgments from human annotators and GPT-4 to avoid the problem of grounding an absolute scale. Elo ratings of the most competitive models can be seen in Table 1. We note that human and GPT-4 ranking of models on the Vicuna benchmark disagree partially
particularly for Guanaco 7B
but are consistent for most models with a Kendall Tau of τ = 0.43 and Spearman rank correlation of r = 0.55 at the system level. At the example level
the agreement between GPT-4 and human annotators' majority vote is weaker with Fleiss κ = 0.25. Overall
this shows a moderate agreement between system-level judgments by GPT-4 and human annotators
and thus that model-based evaluation represents a somewhat reliable alternative to human evaluation. We discuss further considerations in Section 6.2.\n6\n[16]\n1\nElo rankings in Table 7 indicate that Guanaco 33B and 65B models outperform all models besides GPT-4 on the Vicuna and OA benchmarks and that they perform comparably to ChatGPT in line with Table 6. We note that the Vicuna benchmark favors open-source models while the larger OA benchmark favors ChatGPT. Furthermore
we can see from Tables 5 and6 that the suitability of a finetuning dataset is a determining factor in performance. Finetuning Llama models on FLAN v2 does particularly well on MMLU
but performs worst on the Vicuna benchmark (similar trends are observed with other models). This also points to partial orthogonality in current evaluation benchmarks: strong MMLU performance does not imply strong chatbot performance (as measured by Vicuna or OA benchmarks) and vice versa.\n7\n6\n5\n6\nGuanaco is the only top model in our evaluation that is not trained on proprietary data as the OASST1 dataset collection guidelines explicitly forbid the use of GPT models. The next best model trained on only open-source data is the Anthropic HH-RLHF model
which scores 30 percentage points lower than Guanaco on the Vicuna benchmark (see Table 6). Overall
these results show that 4-bit QLORA is effective and can produce state-of-the-art chatbots that rival ChatGPT. Furthermore
our 33B Guanaco can be trained on 24 GB consumer GPUs in less than 12 hours. This opens up the potential for future work via QLORA tuning on specialized open-source data
which produces models that can compete with the very best commercial models that exist today.\n6\nQualitative Analysis\nWhile quantitative analysis is the core of our evaluation
there are a number of issues with only looking at summary statistics. Perhaps the largest is the problem of benchmark validity [36]-whether a benchmark truly tests what its name or description suggests is always at question
especially as we discover \"shortcuts\" to solve benchmarks that machine learning models sometimes exploit [22
46]. To partially alleviate this
we here perform some qualitative analysis
in two sections. First
in §6.1 we show some examples that we believe are representative of some observed patterns in the text generated by our 65b Guanaco model. Second
§6.2 we detail considerations about the results we have discussed and our interpretation of them.\n[36]\n[22
\n46]\nQualitative Analysis of Example Generations\nTo find examples
we first go through data generated for the Vicuna benchmark and the OpenAssistant benchmark
and look for patterns in the answers Guanaco generates. When we notice a pattern we attempt to setup a question or prompt that will induce the pattern even though it is the incorrect solution
e.g.
if we observe that the model tends to give long-winded answers we prompt the model to \"Answer yes or no without explanation.\" We use this to find \"lemons\" where we manage to adversarially break the model and \"cherries\" where we fail to break the model
and present both. All generations in this section were generated with Nucleus Sampling [25] with p = 0.9.\n[25]\nOf course
this is by no means comprehensive
since it is beyond the scope of this small qualitative study to control for all the variables involved
e.g.
the full distribution of responses the model can generate for a given prompt is quite large
so we rely on samples we hope are representative. However
we believe describing these examples gives context to the quantitative evidence shown earlier in the paper. Since we open source all models and code
we hope this section will inspire future work to examine in more detail the issues we present here.\nFactual Recall For questions such as \"What is the capital of Zambia?\" all models consistently generate correct answers
e.g.
\nGuanaco:\nThe capital of Zambia is Lusaka.\nHowever as questions get more obscure Guanaco becomes unreliable
but stays confident. For instance in response to this prompt from HotPotQA [68]:\n[68]\nIn what year was the singer who popularized 'I'll Keep the Lovelight Burning' born?\nGuanaco:\nAl Jolson is credited with popularizing the song 'I'll Keep the Lovelight Burning
' and he was born in the year 1886.
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
null
The memory footprint for QLoRA training with different LLaMA base models can be seen in Figure 6. We see that the 33B model does not quite fit into a 24 GB GPU and that paged optimizers are needed to train it. The figure depicts batch size 1 with a sequence length of 512 and gradient checkpointing. This means that if one uses a larger batch size, or if a long sequence is processed, the activation gradients might consume a considerable amount of memory.