page_content='Figure 5: Coverage probability for each relative position in a single training example (2k -> 16k); curves are shown for the original model, 2 chunks, 3 chunks, and RandPos.\nUtilizing multiple chunks reduces coverage probability within the original [0, 2,048] context window,\nwhile enhancing the coverage likelihood of relative positions in the range [2,048, 16,383]. Proba-' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}
page_content='bility of coverage increases with the number of chunks. Pushing the chunk number to its limit yields\nRandPos, which utilizes 2,048 chunks and, in expectation, covers every relative position in each\ntraining example.\nPoSE achieves coverage of all positions within the target context window by randomly sampling\nthe chunk sizes and skipping bias terms for each training example. In this section, we explore the' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}
page_content='probability of each relative position being covered by a training example, using a context extension\nof 2,048 to 16,384 as an example. For the unextended original version, the probability of a relative\nposition within 2,048 being covered is 1, and the probability of a relative position above 2,048 being\ncovered is 0. For the cases where the number of chunks is 2, 3, or 2,048 (i.e., RandPos), we use the' metadata={'source': 'pdfs/paper_2.pdf', 'page': 12}
page_content='# Monte Carlo estimate of coverage probability, 2 chunks.\n# Lt: target (extended) context length; Lc: original context length.\nimport random\nimport numpy as np\nLt, Lc = 16384, 2048\nvisit_prob_list = np.array([0] * Lt, dtype=float)\niter_times = 10000\nfor _ in range(iter_times):\n    l0 = random.randint(1, Lc - 1)\n    l1 = Lc - l0\n    u1 = random.randint(0, Lt - Lc)\n    rng1 = set(range(1, max(l0, l1)))\n    rng2 = set(range(u1 + 1, u1 + Lc))\n    rng = rng1 | rng2\n    for x in rng:\n        visit_prob_list[x] += 1\nvisit_prob_list /= iter_times' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}
page_content='# Monte Carlo estimate of coverage probability, 3 chunks.\nvisit_prob_list = np.array([0] * Lt, dtype=float)\niter_times = 10000\nfor _ in range(iter_times):\n    l0 = random.randint(1, Lc - 2)\n    l1 = random.randint(1, Lc - l0 - 1)\n    l2 = Lc - l0 - l1\n    u1 = random.randint(0, Lt - Lc)\n    u2 = random.randint(u1, Lt - Lc)\n    rng1 = set(range(1, max(l0, l1, l2)))\n    rng2 = set(range(u1 + 1, u1 + l0 + l1))\n    rng3 = set(range(u2 - u1 + 1, u2 - u1 + l1 + l2))\n    rng4 = set(range(u2 + l1 + 1, u2 + Lc))\n    rng = rng1 | rng2 | rng3 | rng4\n    for x in rng:\n        visit_prob_list[x] += 1\nvisit_prob_list /= iter_times' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}
page_content='# Monte Carlo estimate of coverage probability, 2048 chunks (RandPos).\nvisit_prob_list = np.array([0] * Lt, dtype=float)\niter_times = 100\nfor _ in range(iter_times):\n    tot_pos_list = list(range(Lt))\n    new_pos_list = random.sample(tot_pos_list, Lc)\n    new_pos_list.sort()\n    distance_rng = set()\n    for i in range(0, len(new_pos_list) - 1):\n        for j in range(i + 1, len(new_pos_list)):\n            distance_rng.add(new_pos_list[j] - new_pos_list[i])\n    for x in distance_rng:\n        visit_prob_list[x] += 1\nvisit_prob_list /= iter_times\nFigure 6: Python code used for calculating the coverage probability of each relative position in Figure 5.\nTable 5: Comparison of different chunk numbers. We report perplexity with evaluation context' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}
page_content='window ranging from 2k to 16k. By increasing the chunk number, relative positions in [2,048, 16,383]\nreceive an increased chance of being trained, rendering better results for context extension. However,\nan extremely large chunk number also damages model performance.\nChunk number | Proof-pile perplexity at 2k / 4k / 8k / 16k\n1 | 2.83 / >10^3 / >10^3 / >10^3\n2 | 2.95 / 2.74 / 2.61 / 2.60\n3 | 2.93 / 2.72 / 2.60 / 2.59\n2048 | 7.26 / 6.83 / 6.76 / 7.73\nMonte Carlo method to estimate this coverage probability. The code used is demonstrated in Figure 6.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}
page_content='The estimated results are shown in Figure 5. It can be seen that PoSE reduces the coverage probability\nof positions within the original context window, while all relative positions in [2,048, 16,383]\nreceive a certain increase in their chance of being covered, and the probability of coverage increases as\nthe number of chunks increases. For the case where the number of chunks is equal to 2,048, the\nprobability of each relative position being covered is close to 1. With this observation, we further' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}
page_content='compare the impact of chunk number on language modeling capability, as presented in Table 5.\nIncreasing the chunk number effectively renders better results for context extension. However, an extremely\nlarge chunk number also damages model performance, due to severe deviation from the position\nencoding structure used in the pre-training phase. We believe that the choice of the number of chunks is\na trade-off between training efficiency and performance.' metadata={'source': 'pdfs/paper_2.pdf', 'page': 13}
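To make the sampling described above concrete, here is a minimal sketch (our own illustration, not the paper's code) of how a two-chunk PoSE-style training example could assign position ids; it follows the l0/l1/u1 notation of Figure 6, and the function name and the 2,048 -> 16,384 defaults are assumptions.

```python
import random

def pose_position_ids_two_chunks(Lc: int = 2048, Lt: int = 16384) -> list[int]:
    """Assign position ids to an Lc-token training example so that, across
    examples, relative positions up to Lt - 1 are exercised (two-chunk case)."""
    l0 = random.randint(1, Lc - 1)   # length of the first chunk
    l1 = Lc - l0                     # length of the second chunk
    u1 = random.randint(0, Lt - Lc)  # skipping bias applied to the second chunk
    first = list(range(0, l0))                    # positions 0 .. l0-1
    second = list(range(l0 + u1, l0 + u1 + l1))   # positions shifted by u1
    return first + second
```

Within-chunk pairs then realize relative positions 1 .. max(l0, l1) - 1 and cross-chunk pairs realize u1 + 1 .. u1 + Lc - 1, matching the sets rng1 and rng2 in the two-chunk snippet of Figure 6.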
page_content='Deja Vu: Contextual Sparsity for Efficient LLMs at Inference Time\nZichang Liu 1, Jue Wang 2, Tri Dao 3, Tianyi Zhou 4, Binhang Yuan 5, Zhao Song 6, Anshumali Shrivastava 1,\nCe Zhang 5, Yuandong Tian 7, Christopher Ré 3, Beidi Chen 8 7\nAbstract\nLarge language models (LLMs) with hundreds of\nbillions of parameters have sparked a new wave\nof exciting AI applications. However, they are\ncomputationally expensive at inference time. Spar-\nsity is a natural approach to reduce this cost, but' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}
page_content='existing methods either require costly retraining,\nhave to forgo LLM’s in-context learning ability, or\ndo not yield wall-clock time speedup on modern\nhardware. We hypothesize that contextual sparsity ,\nwhich are small, input-dependent sets of attention\nheads and MLP parameters that yield approxi-\nmately the same output as the dense model for a\ngiven input, can address these issues. We show that\ncontextual sparsity exists, that it can be accurately' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0} |
page_content='predicted, and that we can exploit it to speed up\nLLM inference in wall-clock time without compro-\nmising LLM’s quality or in-context learning ability.\nBased on these insights, we propose DEJAVU , a\nsystem that uses a low-cost algorithm to predict\ncontextual sparsity on the fly given inputs to each\nlayer, along with an asynchronous and hardware-\naware implementation that speeds up LLM\ninference. We validate that DEJAVU can reduce the\ninference latency of OPT-175B by over 2 ×com-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0} |
page_content='pared to the state-of-the-art FasterTransformer,\nand over 6× compared to the widely used Hugging\nFace implementation, without compromising\nmodel quality. The code is available at\nhttps://github.com/FMInference/DejaVu.\n1 Introduction\nLarge language models (LLMs), such as GPT-3, PaLM,\nand OPT have demonstrated that an immense number of\n1 Rice University, 2 Zhejiang University, 3 Stanford University,\n4 University of California, San Diego, 5 ETH Zurich,\n6 Adobe Research, 7 Meta AI (FAIR), 8 Carnegie Mellon Univer-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}
page_content='sity. Correspondence to: Zichang Liu <zl71@rice.edu>, Tri Dao\n<trid@stanford.edu>, Tianyi Zhou <t8zhou@ucsd.edu>, Zhao Song\n<zsong@adobe.com>, Beidi Chen <beidic@andrew.cmu.edu>.\nProceedings of the 40th International Conference on Machine\nLearning, Honolulu, Hawaii, USA. PMLR 202, 2023. Copyright\n2023 by the author(s).\nparameters unleashes impressive performance and emergent\nin-context-learning abilities—they can perform a task by\nconditioning on input-output examples, without updating' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}
page_content='their parameters (Bommasani et al., 2021; Liang et al.,\n2022; Brown et al., 2020; Min et al., 2022; Chan et al.,\n2022). However, they are very expensive at inference time,\nespecially for latency-sensitive applications (Pope et al.,\n2022). An ideal inference-time model should use less com-\nputation and memory while maintaining the performance\nand special abilities of pre-trained LLMs. The simplest and\nmost natural approach is sparsification or pruning, which' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0} |
page_content='has a long history before the LLM era (LeCun et al., 1989).\nUnfortunately, speeding up inference-time sparse LLMs in\nwall-clock time while maintaining quality and in-context\nlearning abilities remains a challenging problem.\nWhile sparsity and pruning have been well-studied, they\nhave not seen wide adoption on LLMs due to the poor\nquality and efficiency trade-offs on modern hardware such\nas GPUs. First, it is infeasible to retrain or iteratively prune' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0} |
page_content='models at the scale of hundreds of billions of parameters.\nThus, methods in iterative pruning and lottery ticket\nhypothesis (Lee et al., 2018; Frankle & Carbin, 2018) can\nonly be applied to smaller-scale models. Second, it is\nchallenging to find sparsity that preserves the in-context\nlearning ability of LLMs. Many works have shown the\neffectiveness of task-dependent pruning (Michel et al., 2019;\nBansal et al., 2022), but maintaining different models for' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0} |
page_content='each task conflicts with the task independence goal of LLMs.\nLastly, it is hard to achieve wall-clock time speed-up with\nunstructured sparsity due to its well-known difficulty with\nmodern hardware (Hooker, 2021). For example, recent\ndevelopment in zero-shot pruning like SparseGPT (Frantar\n& Alistarh, 2023) finds 60% unstructured sparsity but does\nnot yet lead to any wall-clock time speedup.\nAn ideal sparsity for LLMs should (i) not require model' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0} |
page_content='retraining, (ii) preserve quality and in-context learning\nability, and (iii) lead to speed-up in wall-clock time on\nmodern hardware. To achieve such demanding requirements,\nwe go beyond static sparsity in previous works (e.g., struc-\ntured/unstructured weight pruning). We instead envision\ncontextual sparsity , which are small, input-dependent\nsets of attention heads and MLP parameters that lead to\n(approximately) the same output as the full model for an' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0} |
page_content='input. Inspired by the connections between LLMs, Hidden\narXiv:2310.17157v1 [cs.LG] 26 Oct 2023' metadata={'source': 'pdfs/paper_3.pdf', 'page': 0}
page_content='[Figure 1 panels: (a) Contextual Sparsity: contextual sparsity per transformer layer for OPT-175B; (b) Accuracy-Efficiency Trade-offs: accuracy versus theoretical reduction for static, non-contextual, and contextual sparsity.]\nFigure 1. (1) LLMs have up to 85% contextual sparsity for a given\ninput. (2) Contextual sparsity has much better efficiency-accuracy' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1}
page_content='trade-offs (up to 7 ×) than non-contextual sparsity or static sparsity.\nMarkov Models (Xie et al., 2022; Baum & Petrie, 1966), and\nthe classic Viterbi algorithm (Viterbi, 1967), we hypothesize\nthat for pre-trained LLMs,\ncontextual sparsity exists given any input.\nThe hypothesis, if true, would enable us to cut off specific\nattention heads and MLP parameters (structured sparsity)\non the fly for inference-time, without modifying pre-trained\nmodels. However, there are three challenges.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1} |
page_content='Existence : It is nontrivial to verify if such contextual sparsity\nexists, and naive verification can be prohibitively expensive.\nPrediction : Even if contextual sparsity exists, it is challeng-\ning to predict the sparsity for a given input in advance.\nEfficiency : Even if the sparsity can be predicted, it might\nbe difficult to achieve end-to-end wall-clock time speedup.\nTaking OPT-175B as an example, the latency of one MLP\nblock is only 0.2 ms on an 8 ×A100 80GB machine. Without' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1} |
page_content='a fast prediction and optimized implementation, the overhead\ncan easily increase the LLM latency rather than reduce it.\nIn this work, we address these challenges as follows:\nExistence : Fortunately, we verify the existence of contextual\nsparsity with a surprisingly simple approach. To achieve\nessentially the same output, contextual sparsity is on average\n85% structured sparse and thereby potentially leads to a 7×\nparameter reduction for each specific input while maintain-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1} |
page_content='ing accuracy (Figure 1(a)). During explorations of contextual\nsparsity, we make important empirical observations and build\na theoretical understanding of major components in LLMs\nthat help address the prediction and efficiency challenge.\n[Figure 2 diagram: predictors run alongside the Attention and MLP blocks at block k and block k+1.]\nFigure 2. DEJAVU uses lookahead predictors to side-step prediction\ncosts: given the input to the attention layer at block k, they (asyn-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1}
page_content='chronously) predict the contextual sparsity for the MLP at block k,\nand given the input to the MLP at block k, they predict the sparsity\nfor the attention head at the next layer.\nPrediction : We discover that contextual sparsity depends\nnot only on individual input tokens (i.e., non-contextual\ndynamic sparsity) but also on their interactions ( contextual\ndynamic sparsity). Figure 1(b) shows that with pure dynamic\ninformation, sparsity prediction is inaccurate. Only with' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1} |
page_content='token embeddings with sufficient contextual information\ncan we predict sparsity accurately. Another finding is that\ncontextual dynamic sparsity for every layer can be predicted\nbased on the “similarity” between layer parameters (head-\ns/MLP) and the output from the previous layer, which carries\nthe immediate contextual mixture of token embeddings.\nEfficiency : Because at inference time, model parameters are\nstatic, inspired by the classical nearest neighbor search (NNS)' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1} |
page_content='literature and its applications in efficient deep learning, it is\npossible to formulate the above similarity-based prediction\nas an NNS problem (Indyk & Motwani, 1998b; Zhang et al.,\n2018; Chen et al., 2020a). However, as mentioned, the over-\nhead might be difficult to overcome as we would need to\nperform on-the-fly predictions before every layer. Luckily,\nwe exploit a phenomenon of LLM where token embeddings\nchange slowly across layers due to residual connections (well-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1} |
page_content='known in computer vision (He et al., 2016)). Since the inputs\nto a few consecutive layers are very similar, we can design\nan asynchronous lookahead predictor (Figure 2).\nBased on our findings, we present a system, DEJAVU , that\nexploits contextual sparsity and realizes efficient LLMs for\nlatency-sensitive applications.\n•In Section 4.1 and Section 4.2, we present a low-cost\nlearning-based algorithm to predict sparsity on the fly.\nGiven the input to a specific layer, it predicts a relevant' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1} |
page_content='subset of attention (heads) or MLP parameters in the next\nlayer and only loads them for the computation.\n•In Section 4.3, we propose an asynchronous predictor (simi-\nlar to classic branch predictor (Smith, 1998)) to avoid the se-\nquential overhead. A theoretical guarantee justifies that the' metadata={'source': 'pdfs/paper_3.pdf', 'page': 1}
page_content='cross-layer design suffices for accurate sparsity prediction.\nAfter integrating a hardware-aware implementation of sparse\nmatrix multiply (Section 4.4), DEJAVU (written mostly in\nPython) can reduce the latency of open-source LLMs such\nas OPT-175B by over 2× end-to-end without quality\ndegradation compared to the state-of-the-art library Faster-\nTransformer from Nvidia (written entirely in C++/CUDA),' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
page_content='and over 6× compared to the widely used Hugging Face\nimplementation at small batch sizes. Furthermore, we show\nseveral ablations on different components of DEJAVU and\nits compatibility with quantization techniques.\n2 Related Work and Problem Formulation\nWe first briefly discuss the rich literature on efficient\ninference. Then, we introduce the latency breakdown in our\nsetting. Last, we provide a formal problem formulation.\n2.1 Quantization, Pruning, Distillation for Inference' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
page_content='Various relaxations have been studied for decades for\nmodel inference in machine learning. There are three main\ntechniques: quantization (Han et al., 2015; Jacob et al.,\n2018; Nagel et al., 2019; Zhao et al., 2019), pruning or\nsparsity (Molchanov et al., 2016; Liu et al., 2018; Hoefler\net al., 2021), and distillation (Hinton et al., 2015; Tang et al.,\n2019; Touvron et al., 2021). They are orthogonal areas and\nusually excel in different settings. Recently, there is active' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2} |
page_content='research attempting to apply one or a combination of such\ntechniques in LLM inference (Yao et al., 2022; Park et al.,\n2022; Dettmers et al., 2022; Frantar et al., 2022; Frantar &\nAlistarh, 2023; Bansal et al., 2022; Xiao et al., 2022). More\ndiscussion is presented in Appendix A.\n2.2 LLM Inference Latency Breakdown\nThe generative procedure of LLMs consists of two phases: (i)\ntheprompt phase takes an input sequence to generate the keys\nand values (KV cache) for each transformer block of LLMs,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2} |
page_content='which is similar to the forwarding pass of LLMs training;\nand (ii) the token generation phase utilizes and updates the\nKV cache to generate tokens step by step, where the current\ntoken generation depends on previously generated tokens.\nThis paper studies the setting where the token generation\nphase easily dominates the end-to-end inference time. As\nshown in Table 1, generating a sequence of length 128 takes\nmuch longer time than processing a sequence of length 128' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2} |
page_content='as prompt due to I/O latency of loading model parameters.\nIn addition, Table 2 shows that attention and MLP are both\nbottlenecks in LLMs, e.g., in 175B models, loading MLP\nparameters takes around 2/3 of the total I/O and attention\nheads take the other 1/3. Further, in the tensor-parallel regime,\nthere are two communications between GPUs, one after\nthe attention block, and the other one after the MLP block.\nAs shown in Table 3, communication between GPUs takes' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
page_content='around 15% of the token generation latency. This paper focuses on\nmaking attention and MLP more efficient. The communication cost implies that the upper bound of such a speed-up is around\n6× when skipping all transformer blocks.\nTable 1. Theoretical breakdown for prompting versus token generation (tensor model parallelism on 8 A100-80G GPUs).\nSetting | TFLOPs | I/O | Compute Latency (ms) | I/O Latency (ms)\nPrompting 128 | 44.6 | 330 GB | 17.87 | 20.6\nToken Generation 128 | 44.6 | 41 TB | 17.87 | 2600' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
page_content='Table 2. Theoretical breakdown for Attention block versus MLP\nblock in one transformer layer when generating one token (tensor\nmodel parallelism on 8 A100-80G GPUs).\nBlock | GFLOPs | I/O (GB) | Compute Latency (ms) | I/O Latency (ms)\nAttention Block | 1.21 | 1.12 | 0.00048 | 0.07\nMLP Block | 2.41 | 2.25 | 0.00096 | 0.14\nTable 3. Latency breakdown of generating 1 token under the setting\nof batch size 1 and prompt length 128 on 8 A100-80GB.\nAll Reduce | MLP Block | Attention Block | Others\n6 ms | 19 ms | 13 ms | 2 ms' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
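To see why token generation is I/O-bound at batch size 1, a back-of-the-envelope estimate (our own illustration, not from the paper; the ~2 TB/s per-GPU bandwidth figure is an assumed round number for A100 HBM) divides the bytes of weights streamed per token by the aggregate memory bandwidth:

```python
# Rough per-token I/O latency estimate for OPT-175B in FP16 on 8 GPUs.
# Assumption: at batch size 1, nearly all weights are read once per decoded token.
params = 175e9
bytes_per_param = 2            # FP16
n_gpus = 8
hbm_bw = 2.0e12                # ~2 TB/s per A100-80GB (assumed)

model_bytes = params * bytes_per_param               # ~350 GB of weights
io_latency_ms = model_bytes / (n_gpus * hbm_bw) * 1e3
print(f"approx. weight-I/O latency per token: {io_latency_ms:.1f} ms")  # ~22 ms
```

This is the same order as the ~20 ms/token implied by Table 1 (2600 ms of I/O latency for 128 generated tokens), whereas the per-token compute latency is far smaller.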
page_content='2.3 Problem Formulation\nThe goal is to reduce the generation latency of LLMs by\nexploiting contextual sparsity. In the following, we formally\ndefine the sparsified attention and MLP blocks.\nSparsified MLP: There are two linear layers in one MLP\nblock, $W^1, W^2 \in \mathbb{R}^{d \times 4d}$. Denote $y \in \mathbb{R}^{1 \times d}$ as the input to the\nMLP block in the current generation step. Let each column\n(the weight of the $i$-th neuron) of the linear layers be $W^1_i, W^2_i \in \mathbb{R}^{d \times 1}$. With contextual sparsity, only a small set of them are' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
page_content='required for computation. Let $S_M \subseteq [4d]$ denote such a set of\nneurons for input $y$. The sparsified MLP computation is\n$\mathrm{MLP}_{S_M}(y) = \sigma(y W^1_{S_M}) (W^2_{S_M})^\top$,  (1)\nwhere $\sigma$ is the activation function, e.g., ReLU or GeLU. Note\nthat since the computation in the first linear layer results in sparse\nactivations, the second linear layer is also sparsified.\nSparsified Attention: Let $X \in \mathbb{R}^{n \times d}$ denote the embeddings\nof all tokens (e.g., prompts and previously generated tokens).\nLet $y \in \mathbb{R}^{1 \times d}$ be the input to the Multi-Head Attention' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
page_content='(MHA) in the current generation step. Suppose there are $h$\nheads. For each $i \in [h]$, we use $W^K_i, W^Q_i, W^V_i \in \mathbb{R}^{d \times d_h}$ to\ndenote the key, query, and value projections for the $i$-th head, and\n$W^O_i \in \mathbb{R}^{d_h \times d}$ for the output projection. With contextual spar-\nsity, we denote by $S_A$ a small set of attention heads leading to\napproximately the same output as the full attention for input\n$y$. Following the notation system in (Alman & Song, 2023),\nthe sparsified MHA computation can be formally written as\n$\mathrm{MHA}_{S_A}(y) = \sum_{i \in S_A} \underbrace{H_i(y)}_{1 \times d_h} \underbrace{W^O_i}_{d_h \times d}$,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
page_content='where $H_i(y): \mathbb{R}^d \to \mathbb{R}^{d_h}$ and $D_i(y) \in \mathbb{R}$ can be written as\n$H_i(y) := D_i(y)^{-1} \exp(y W^Q_i (W^K_i)^\top X^\top) X W^V_i$,  (2)\n$D_i(y) := \exp(y W^Q_i (W^K_i)^\top X^\top) \mathbf{1}_n$.\nFor both MLP and Attention, given a compute budget, the\ngoal is to find $S_M$ and $S_A$ that minimize the error between\nthe sparse approximation and the full computation.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 2}
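As a reading aid for Eq. (1) and Eq. (2), here is a minimal PyTorch sketch of the sparsified computations with a given index set; it illustrates the formulation only (the function names and shape conventions are ours, and the real DEJAVU kernels are fused, see Section 4.4):

```python
import torch

def sparse_mlp(y, W1, W2, S_M):
    """Eq. (1): y (1, d); W1, W2 (d, 4d); S_M holds the selected neuron indices."""
    h = torch.relu(y @ W1[:, S_M])        # sigma(y W1_{S_M}), shape (1, |S_M|)
    return h @ W2[:, S_M].T               # (W2_{S_M})^T brings it back to (1, d)

def sparse_mha(y, X, WQ, WK, WV, WO, S_A):
    """Eq. (2): y (1, d); X (n, d); WQ/WK/WV (h, d, d_h); WO (h, d_h, d)."""
    out = torch.zeros_like(y)
    for i in S_A:                                      # only the selected heads
        scores = torch.exp(y @ WQ[i] @ (X @ WK[i]).T)  # unnormalized, shape (1, n)
        H_i = scores @ (X @ WV[i]) / scores.sum()      # D_i(y)^{-1} ..., shape (1, d_h)
        out = out + H_i @ WO[i]                        # accumulate head output
    return out
```

With a budget of |S_M| neurons and |S_A| heads, the question the rest of the paper addresses is how to pick those sets per input without first running the dense model.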
page_content='[Figure 3 panels: (a) Contextual sparsity in Attention Head: percentage of not-activated heads per transformer layer for OPT-30B, OPT-66B, and OPT-175B; (b) Contextual sparsity in MLP Block: percentage of not-activated neurons per layer for the same models.]\nFigure 3. In Figure (a), we plot the percentage of not-activated\nattention heads. By only keeping heads that yield large output' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3}
page_content='norms, we can silence over 80% attention heads for a given token.\nIn Figure (b), we plot the average sparsity we impose on MLP layers.\nWe can zero out over 95% of MLP parameters for a given token.\n3Pre-trained LLMs are Contextually Sparse\nIn this section, we present several key observations and the-\noretical understandings of sparsity in LLMs, upon which the\nDEJAVU design is based. We first test the contextual sparsity\nhypothesis and verify that contextual sparsity exists in pre-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3} |
page_content='trained LLMs in Section 3.1. Then, we build an understand-\ning of why contextual sparsity happens naturally even when\nLLMs are densely trained in Section 3.2. Finally, we present\nan observation on residual connections and explain their\nrelationship to contextual sparsity analytically in Section 3.3.\n3.1 Contextual Sparsity Hypothesis\nInspired by prior pruning literature (Molchanov et al., 2016),\nwe find a surprisingly simple method is sufficient to study and' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3} |
page_content='verify our hypothesis. In this section, we describe the testing\nprocedure, observation details, and insights of this study.\nVerification: Our test is performed on OPT-175B, 66B, and\n30B models and various downstream datasets such as Open-\nBookQA (Mihaylov et al., 2018) and Wiki-Text (Merity et al.,\n2016). We find the contextual sparsity for every input exam-\nple with two forward passes of the model. In the first pass, we\nrecord a subset of parameters, specifically which attention' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3} |
page_content='heads and MLP neurons yield large output norms for the input.\nIn the second pass, each input example only uses the recorded\nsubset of parameters for the computation. Surprisingly, these\ntwo forward passes lead to similar prediction or performance\non all in-context learning and language modeling tasks.\nObservation: Figure 3 shows that on average, we can impose\nup to 80% sparsity on attention heads and 95% sparsity on\nMLP neurons. As mentioned in Section 2, the OPT-175B model' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3}
page_content='has 2× the MLP parameters of its attention blocks.\nTherefore, the total sparsity here is around 85%. Since these are\nall structured sparsity (heads and neurons), predicting them\naccurately could potentially lead to a 7× speedup.\nInsight: It is intuitive that we can find contextual sparsity in\nMLP blocks at inference time because of their activation func-\ntions, e.g., ReLU or GeLU (Kurtz et al., 2020). Similar obser-\nvations were made by (Li et al., 2022). However, it is surpris-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3}
page_content='ing that we can find contextual sparsity in attention layers.\nNote that, finding contextual sparsity in attention is not the\nsame as head pruning. We cross-check that different exam-\nples have different contextual sparsity. Although 80% of the\nparameters are not included in the paths for a given example,\nthey might be used by other examples. Next, we will try to\nunderstand why contextual sparsity exists in attention blocks.\n3.2 Token Clustering in Attention Layers' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3} |
page_content='In the previous section, we have verified that there exists\ncontextual sparsity for a given input in LLMs. In this\nsection, we try to understand the reason for such phenomena,\nespecially in attention layers. We first show an in-depth\nobservation of attention. Then we present a hypothesis that\nself-attentions are conceptually clustering algorithms. Last\nwe show analytical evidence to support this hypothesis.\nObservation: Figure 4 shows the attention map of three' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3} |
page_content='different heads from the same layer for an example input.\nThe next token it should predict is “Truck”. Darker color\nrepresents higher attention scores. We observe that the\nmiddle head is a relatively uniform token-mixing head\nwhile the top and bottom ones are “heavy hitter” attention\nheads (with high attention to “like” and “shipping”).\nUnsurprisingly, only selecting heavy hitter heads but not\nuniform heads does not affect the prediction, since uniform' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3} |
page_content='heads do not model or encode important token interactions.\nIn the next section, we will also explain in detail how the\ncriteria for selecting uniform attention heads and heads with\nsmall output norms are highly correlated.\nHypothesis: We hypothesize that the attention head is\nperforming mean-shift clustering (Derpanis, 2005).\nRecall the notation defined in Section 2.3. For the $i$-th head\nat the current layer, $X = [x_1, \ldots, x_n]^\top \in \mathbb{R}^{n \times d}$ are the token\nembeddings in the previous time steps. $X W^K_i$ and $X W^V_i$' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3}
page_content='are the projections of the embeddings. For an input embedding\n$y$, the output is $\tilde{y}_i = H_i(y)$, where $H_i(y)$ is defined in Eq. 2.\nFor each $i \in [h]$, if we let $K_i(x_j, y) := \exp(y W^Q_i (W^K_i)^\top x_j)$\nmeasure the similarity between $x_j$ and $y$, and define\n$m_i(y) := \frac{\sum_j K_i(x_j, y)\, x_j}{\sum_j K_i(x_j, y)}$, then we have $\tilde{y}_i = m_i(y) W^V_i$. Fur-\nther, if we set $W^V_i = I$ and consider the residual connection\nfollowed by layer norm, then in the next layer, the embedding' metadata={'source': 'pdfs/paper_3.pdf', 'page': 3}
page_content='Figure 4. We visualize the attention scores of three different heads\nfor the exemplary sentence “This fruit shipping company provide different vehicle options like car and [MASK]”,\nwhere the next token to predict is “Truck”. Head 42 and Head 44 give heavy atten-\ntion scores on particular tokens while Head 43 is more uniform.\n$\hat{y}_i$ of the current token becomes $\hat{y}_i = \mathrm{Normalize}(y + \tilde{y}_i) =\n\mathrm{Normalize}(y + m_i(y))$, which has a fixed point $y = \gamma\, m_i(y)$' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4}
page_content='for any scalar $\gamma$. This iteration bears a resemblance to mean-\nshift clustering, which simply performs the iteration $y \leftarrow m_i(y)$\nuntil convergence. This has an obvious fixed point $y = m_i(y)$.\nTherefore, the self-attention head can be regarded as one\nmean-shift step to push input embeddings of different tokens\ntogether, if they are already neighbors in a projection space\nspecified by $W^Q_i (W^K_i)^\top$. Different heads learn different\nprojection spaces to perform clustering. These dynamics' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4}
page_content='explain the precise reason why token embeddings tend to\ncluster after going through more layers, resulting in high\nattention scores among cluster members, and low scores for\nnon-members. Furthermore, the cluster patterns are different\nat different heads (More details in Appendix K).\nThe above analysis not only provides an understanding of\nwhy contextual sparsity exists naturally in pre-trained LLMs,\nbut also inspires our design of “similarity”-based sparsity\nprediction for DEJAVU in Section 4.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4} |
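The mean-shift reading above can be made concrete with a few lines of NumPy; this is our own illustration of the hypothesis (with W^V_i = I as in the text), not code from the paper:

```python
import numpy as np

def mean_shift_step(y, X, WQ_i, WK_i):
    """One attention head viewed as a mean-shift step.
    y: (d,) query embedding; X: (n, d) context embeddings;
    WQ_i, WK_i: (d, d_h) projections for head i. Returns m_i(y)."""
    sims = np.exp((y @ WQ_i) @ (X @ WK_i).T)   # K_i(x_j, y) for every j, shape (n,)
    weights = sims / sims.sum()                # the D_i(y)^{-1} normalization
    return weights @ X                         # m_i(y): weighted average of the x_j
```

Iterating y <- m_i(y) pulls y toward the context tokens it already attends to in head i's projection space; heads whose projection places y far from every x_j contribute little to the output, which is the contextual sparsity observed in attention.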
page_content='3.3 Slowly Changing Embeddings across Layers\nWe first present our observation that embeddings change\nslowly across consecutive layers. Then we provide a detailed\nanalysis of the phenomenon. Finally, we show its close\nconnection with contextual sparsity. Details are in Section B.\nHighly similar embeddings in consecutive layers: In\nFigure 5(a), we show that for the same given input, the cosine\nsimilarity between embeddings or activations in two consec-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4}
page_content='utive layers is exceptionally high on 7 different sizes of OPT\nmodels. Specifically, we collect activations from each layer\nwhile performing OPT model inference on C4 validation\nset (Raffel et al., 2019). Taking OPT-175B as an example,\nstarting from the second layer, the similarity between any\ntwo consecutive layers is around 0.99, which indicates that\nwhen an input is passed through the model, the direction of\nits embedding changes slowly. Interestingly, the most drastic' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4} |
page_content='change happens in the first layer. Furthermore, we increase\nthe gap and investigate the similarity between the embedding\nat layer $l$ and at layer $l+n$, shown in Figure 5(b). As we\nincrease the gap, the similarity decreases as expected, while\nthe differences in cosine similarity between various choices\n[Figure 5 panels: (a) Model Comparison: cosine similarity across OPT sizes from 125m to 175b; (b) Across Layer: cosine similarity per transformer layer for n = 1, 2, 4, 8.]' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4}
page_content='[Figure 5 panels: (c) Residual Around Attention and (d) Residual Around MLP: the norms ∥X∥ and ∥F(X)∥ per transformer layer.]\nFigure 5. Slowly Changing Embedding. Figure (a) shows the\nmedian cosine similarity between representations at two consecutive\nlayers across all layers for different OPT models. All models show\na similarity greater than 95%. Figure (b) shows cosine similarity' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4}
page_content='stays high even a few layers apart. For the residual connection\nX′ = X + F(X) inside each block, we plot the ℓ2 norm of X and\nF(X) in Figure (c) and Figure (d). ∥X∥ is significantly higher than\n∥F(X)∥, which explains the slowly changing embedding.\nof n are smaller at the shallower layer. We plot the mean sim-\nilarity, and the standard deviation is indicated by the shading.\nSimilar plots on more models are presented in Appendix B.\nConnection to residuals: We verify that the high similarity' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4}
page_content='in embeddings in LLM inference is due to the residual\nconnection. We first dissect the computation graph inside\neach transformer layer to understand the cause behind\nthis phenomenon. There are two residual connections\ninside a transformer layer, one around the attention block,\nand the other one around the MLP block. The residual\nconnection can be written as X+F(X), where Fis either\nthe Multi-Head Attention or two MLP Layers. In Figure 5(c)' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4} |
page_content='and Figure 5(d), indeed we can see that ∥X∥ is significantly\ngreater than ∥F(X)∥, confirming that embeddings are\nchanging slowly because the residual norm is large.\nConnection to Contextual Sparsity: We take a step deeper,\ntrying to understand the reason behind the large residual\nnorm with mathematical modeling. We discover that one pos-\nsible reason for the small ∥F(X)∥ is high sparsity. For the\nMLP Block, high sparsity may contribute to the small norm' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4}
page_content='of F(X) because a large portion of the outputs have small norms.\nSimilar reasoning applies to the Attention Block, and thus\na large number of attention heads yield small-norm outputs.\nResidual Two Sides Bound: Besides empirical reasoning,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 4}
page_content='we formally define the computation of LLMs mathematically.\nUnder our computation model, we can show a shrinking\nproperty that matches our practical observations.\nProofs are in Appendix G, H, I.\nLemma 3.1 (Informal). Let $0 < \epsilon_1 < \epsilon_2 < 1$ be the lower and\nupper bounds of the shrinking factor. Let $x$ be the input and $y$ be the\noutput, with the residual connection $y = x + F(x)$. For\nthe MLP block $F(x)$, we have $\epsilon_1 \le \|y - x\|_2 \le \epsilon_2$. For the' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
page_content='attention block $F(x)$, we have $\epsilon_1 \le \|y - x\|_2 \le \epsilon_2$.\n4 DEJAVU\nIn this section, we present our framework for inference-time\ncontextual sparsity search for LLMs. We introduce the\nsparsity predictor for MLPs in Section 4.1 and for attention\nheads in Section 4.2. DEJAVU’s workflow is shown in\nFigure 2. Section 4.3 discusses exploiting our observation\non LLMs to avoid the sparse prediction overhead with\ntheoretical guarantees. In Section 4.4, we present our' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
page_content='optimized implementation that enables end-to-end latency\nreduction. More details are presented in Section D.\n4.1 Contextual Sparsity Prediction in MLP Blocks\nAs explained in Section 2, MLP blocks are one of the major\nbottlenecks for LLM generation (2/3 of the FLOPs and\nI/Os). In this section, we discuss how we achieve wall-clock\ntime speed-up with contextual sparsity in the MLP blocks.\nChallenge: Figure 3(b) shows that for a given token, a' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
page_content='contextual sparsity of 95% is possible. The contextual\nsparsity in the MLP block can be identified after computing\nthe activation. However, this only demonstrates the existence\nof contextual sparsity but brings no benefits in terms of\nefficiency. A fast and precise prediction is needed to exploit\ncontextual sparsity for end-to-end efficiency. The naive way\nis to select a subset of neurons randomly. Unsurprisingly,\nrandom selection fails to identify the accurate contextual' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5} |
page_content='sparsity, resulting in drastic model degradation.\nA Near-Neighbor Search Problem: Recall that we verify\nthe existence of contextual sparsity by recording which\nneurons yield significant norms. Essentially, given the input,\nthe goal is to search for the neurons that have high inner prod-\nucts with the input, because the activation function “filters"\nlow activation. Thus, we formulate the contextual sparsity\nprediction of an MLP layer as the classical near-neighbor' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5} |
page_content='search problem under the inner product metric.\nDefinition 4.1 (Approximate MaxIP in MLP). Let $c \in (0,1)$\nand $\tau \in (0,1)$ denote two parameters. Given an $n$-vector\ndataset $W^1 \subset \mathbb{S}^{d-1}$ on a unit sphere, the objective of the\n$(c,\tau)$-MaxIP is to construct a data structure that, given a\nquery $y \in \mathbb{S}^{d-1}$ such that $\max_{w \in W^1} \langle y, w \rangle \ge \tau$, retrieves a\nvector $z$ from $W^1$ that satisfies $\langle y, z \rangle \ge c \cdot \max_{w \in W^1} \langle y, w \rangle$.\nRemark 4.2. Our $W^1$ (first linear layer) and $y$ (input\nembedding) in MLP blocks can be viewed as the dataset and' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
page_content='query in Definition 4.1, respectively.\nDesign: The standard state-of-the-art near-neighbor search\nmethods and implementations slow down the computa-\ntion. Take OPT-175B, where $d$ is 12,288, as an example.\nHNSW (Malkov & Yashunin, 2018) requires more than 10 ms,\nand FAISS (Johnson et al., 2019) requires more than 4 ms,\nwhile the MLP computation is only 0.2 ms. The high dimen-\nsionality and complications of data structure implementation\non GPU make the search time longer than the MLP computa-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
page_content='tion. Therefore, we choose a neural network classifier as our\nnear-neighbor search method to exploit the fast matrix multi-\nplication on GPU. For each MLP block, we train a small two-\nlayer fully connected network to predict contextual sparsity.\nCollecting training data is straightforward because we know\nthe contextual sparsity using dense computation. The training\nalgorithm is summarized in Algorithm 1. The sparsified com-\nputation in $W^1$ has two steps: (1) Given $y$, the sparsity predic-' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
page_content='tor $SP_M$ predicts a set $S_M$ of important neurons in the weights\n$W^1$. (2) Compute the sparsified MLP defined in Eq. 1.\nNote that the sparsity in the MLP here is highly structured.\nAlgorithm 1 Sparse Predictor Training\nInput: A pre-trained LLM block with parameter set $M$,\ntoken embedding set at the block $\{x_i\}_{i \in [N]}$, threshold $t$\nSparse Predictor $SP$\n$P_+ \leftarrow \emptyset$, $P_- \leftarrow \emptyset$\nfor $i = 1 \to N$ do\n  $P_+ \leftarrow P_+ \cup \{(x_i, m_r) \mid m_r \in M, m_r(x_i) \ge t\}$\n  $P_- \leftarrow P_- \cup \{(x_i, m_r) \mid m_r \in M, m_r(x_i) < t\}$\nend for\n$SP \leftarrow \mathrm{TRAIN}(P_+, P_-, L)$  ▷ $L$ is a loss function' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
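For concreteness, here is a minimal PyTorch sketch of the kind of two-layer predictor and threshold-based labeling described above; the hidden width, optimizer, loss, and the top-k selection at the end are our assumptions, not details from the paper:

```python
import torch
import torch.nn as nn

class SparsePredictor(nn.Module):
    """Small two-layer network that scores every neuron (or head) of one block."""
    def __init__(self, d_model: int, n_units: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_units))

    def forward(self, x):          # x: (batch, d_model)
        return self.net(x)         # one logit per neuron/head

def train_predictor(predictor, embeddings, unit_outputs, t, epochs=3, lr=1e-4):
    """embeddings: (N, d_model) inputs x_i to the block; unit_outputs: (N, n_units)
    recorded scores m_r(x_i); the threshold t splits P+ from P- as in Algorithm 1."""
    labels = (unit_outputs >= t).float()
    opt = torch.optim.Adam(predictor.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(predictor(embeddings), labels)
        loss.backward()
        opt.step()
    return predictor

# At inference time, S_M could be taken as the top-k scored neurons for input y:
# S_M = predictor(y).topk(k, dim=-1).indices
```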
page_content='4.2 Contextual Sparsity Prediction in Attention Blocks\nAttention blocks account for around 30% of the I/Os during generation. In\nthis section, we describe how DEJAVU exploits contextual\nsparsity to speed up the Attention blocks.\nChallenge: As discussed in Section 3.1, only a few heads per-\nform important computations for a given input token. Similar\nto the MLP blocks, a fast selection of attention heads without\nfull computation is required to reduce end-to-end latency.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
page_content='Furthermore, one particular challenge of sparse prediction in\nattention blocks is attention’s dependence on previous tokens.\nOn the one hand, it is unclear whether the past token’s key and\nvalue caches are needed for sparse prediction. On the other\nhand, it is unclear how to handle the missing KV cache of past\ntokens for the current token computation at the selected head.\nA Near-Neighbor Search Problem: Head prediction\ncan also be formulated as a near-neighbor search problem' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5} |
page_content='based on our understanding in Section 3.2. Since each\nhead is performing mean-shift clustering, after the first\nfew layers, the current token embedding alone is sufficient\nfor the prediction thanks to the token-mixing nature of the\ntransformer. Therefore, the prediction can be based on the\nsimilarity between $y$ and the head parameters.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 5}
page_content='Approach: We design our attention sparse predictor to have\nthe same architecture as the MLP sparse predictor. Each head\nis regarded as one class and a similar training process is used\n(Algorithm 1). Then, similar to how MLP prediction is per-\nformed, the attention sparsity predictor $SP_A$ selects a set $S_A$\nof heads $H_i$ (see Eq. 2). To address the problem of\nmissing KV cache for a past token, we exploit the fact that the' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='generation latency is I/O bounded while computation is essen-\ntially “free". Specifically, for the predicted attention head of\ninput y, we compute the corresponding keys, and values and\nstore them in the KV cache. But we also save a copy of yfor\nall the other non-selected heads. Then during the future token\ngeneration, if there is missing KV cache in the selected heads,\nwe could load stored token embeddings and compute the\nkeys and values together. This requires almost minimal extra' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6} |
page_content='memory access (the main cost is loading the weight matrices).\n4.3 Reducing Overhead with Asynchronous Execution\nSparse prediction overhead may easily increase the end-to-\nend latency rather than reduce it, despite the reduction in\nFLOPs. Therefore, we introduce a look-ahead sparse pre-\ndiction method, inspired by our observations in Section 3.3.\nChallenge: Denote $y_l \in \mathbb{R}^d$ as the input to transformer\nlayer $l$. We can write the computation at layer $l$ as $\tilde{y}_l \leftarrow\n\mathrm{MHA}^l(y_l)$, $\hat{y}_l \leftarrow \mathrm{MLP}^l(\tilde{y}_l)$. With predictors $SP^l_A$' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='and $SP^l_M$,\nthe computation at transformer layer $l$ can be re-written as\n$S^l_A \leftarrow SP^l_A(y_l)$,  $\tilde{y}_l \leftarrow \mathrm{MHA}^l_{S^l_A}(y_l)$,\n$S^l_M \leftarrow SP^l_M(\tilde{y}_l)$,  $\hat{y}_l \leftarrow \mathrm{MLP}^l_{S^l_M}(\tilde{y}_l)$,\nwhere the set $S^l_A$ is the contextual sparsity for the Attention\nblock, and the set $S^l_M$ is the contextual sparsity for the MLP\nblock at the $l$-th layer. Note that the computation at the Attention\nand MLP blocks has to wait for the sparse predictor’s decision. This overhead potentially outweighs the saving\nfrom the Attention and MLP blocks in terms of latency.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='Approach: In Section 3.3, we present the slowly evolving\nembedding phenomenon, which provides an opportunity to\nrelax the sequential computation to parallel computation.\nAlong with the observation of low computation intensity\nduring generation, we parallelize the sparse prediction with the\ncomputation of each block (see Figure 2). The computation\ncan be written as follows:\n$\tilde{y}_l \leftarrow \mathrm{MHA}^l_{S^l_A}(y_l)$,  $\hat{y}_l \leftarrow \mathrm{MLP}^l_{S^l_M}(\tilde{y}_l)$,\n$S^{l+1}_A \leftarrow SP^l_A(y_l)$,  $S^{l+1}_M \leftarrow SP^l_M(y_l)$.\nWe remark that $S^{l+1}_A$ and $S^{l+1}_M$' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='can be computed in parallel with\n$\tilde{y}_l$ or $\hat{y}_l$, while the previous four steps are sequential.\nTheoretical guarantee: The sparse predictor can make fur-\nther cross-layer decisions because of the residual connection.\nWe present an informal lemma statement regarding cross-\nlayer prediction. It is well known that MaxIP is equivalent to\n$\ell_2$ nearest neighbor search. For convenience, we use MaxIP\nhere. We include more discussions and proofs in Section J.' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
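As a schematic illustration of the look-ahead scheduling (ours, not DejaVu's implementation: the real system overlaps the predictor calls with the block's kernels, e.g., on a separate stream, rather than invoking them inline as here):

```python
# Schematic look-ahead prediction: the sparsity sets for layer l are already
# available when layer l runs, and the predictors that produce the sets for
# layer l+1 only need y_l, so they can overlap with layer l's own compute.
# `layers`, `pred_A`, `pred_M` and their methods are hypothetical stand-ins.
def generate_one_token(layers, pred_A, pred_M, y):
    S_A, S_M = pred_A[0](y), pred_M[0](y)       # bootstrap the first layer
    for l, layer in enumerate(layers):
        # In DejaVu these two calls run asynchronously with the block below.
        next_S_A = pred_A[l](y) if l + 1 < len(layers) else None
        next_S_M = pred_M[l](y) if l + 1 < len(layers) else None
        y_tilde = layer.sparse_mha(y, S_A)      # MHA restricted to heads in S_A
        y = layer.sparse_mlp(y_tilde, S_M)      # MLP restricted to neurons in S_M
        S_A, S_M = next_S_A, next_S_M
    return y
```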
page_content='Lemma 4.3 (Informal). Let $\epsilon \in (0,1)$. Let $y_l$ be the input at the $l$-th layer and $y_{l-1}$ be the input at the $(l-1)$-th layer. Suppose\nthat $\|y_l - y_{l-1}\|_2 \le \epsilon$. Then, for any parameters $c, \tau$ such that\n$\epsilon < O(c\tau)$, solving MaxIP$(c,\tau)$ is\nsufficient to solve MaxIP$(0.99c,\tau)$.\n4.4 Hardware-efficient Implementation\nWe describe how DEJAVU is implemented in a hardware-\nefficient manner to realize the theoretical speedup of contex-\ntual sparsity. Taking into account hardware characteristics' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='leads to over a 2× speedup compared to an optimized dense\nmodel, and a 4× speedup over a standard sparse implementation.\nWe highlight some hardware characteristics of GPUs:\n•Small-batch generation is bottlenecked by GPU memory\nI/Os (NVIDIA, 2022; Ivanov et al., 2021; Dao et al., 2022).\nThis is because of low arithmetic intensity. For each\nelement loaded from GPU memory, only a small number\nof floating point operations are performed.\n•GPUs are block-oriented devices: loading a single byte of' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='memory takes the same time as loading a block of memory\naround that same address (Harris, 2013). The block size\nis usually 128 bytes for NVIDIA GPUs (Cook, 2012).\nThese characteristics present some challenges in implement-\ning contextual sparsity. However, they can be addressed with\nclassical techniques in GPU programming.\nKernel fusion: A standard implementation of sparse\nmatrix-vector multiply (e.g., in PyTorch) that separately\nindexes a subset of the matrix $W^1_{S_M}$ before multiplying' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='with the input $y$ would incur 3× the amount of memory I/Os.\nTherefore, to avoid such overhead, we fuse the indexing\nand the multiplication step. Specifically, we load a subset\nof $W^1_{S_M}$ to memory, along with $y$, perform the multiply,\nthen write down the result. This fused implementation (in\nTriton (Tillet et al., 2019)) yields up to a 4× speedup compared\nto a standard PyTorch implementation (Appendix E).\nMemory coalescing: In the dense implementation, the' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='weight matrices of the two linear layers in the MLP are stored\nas $(W^1)^\top$ and $W^2$ so that no extra transpose operation is\nneeded. They are conventionally stored in row-major format.\nIn the sparse implementation, this allows us to load $(W^1_{S_M})^\top$\noptimally (the second dimension is contiguous in memory).\nHowever, for cases where we need to load $(W^2_{S_M})$, this\nformat significantly slows down memory loading, as the indices\nin $S_M$ point to non-contiguous memory. We simply store' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
page_content='these matrices in column-major format (i.e., store $(W^2)^\top$\nin row-major format), then use the same fused kernel above.\nSimilarly, in attention blocks, we store the attention output\nprojection $W^O$ in column-major format.\nThese two techniques (kernel fusion and memory\ncoalescing) make DEJAVU hardware-efficient, yielding up\nto a 2× speedup end-to-end compared to the state-of-the-art\nFasterTransformer (Section 5.1).' metadata={'source': 'pdfs/paper_3.pdf', 'page': 6}
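A small PyTorch sketch of the memory-layout point (our illustration only, not DejaVu's fused Triton kernel): when the gathered dimension is the row dimension of a row-major tensor, the gathered rows are contiguous, so storing both weight matrices transposed lets a single row-gather pattern serve both W^1_{S_M} and W^2_{S_M}.

```python
import torch

d, four_d, k = 4096, 16384, 1024
y = torch.randn(1, d)
W1_T = torch.randn(four_d, d)   # (W^1)^T stored row-major: row i is neuron i's weights
W2_T = torch.randn(four_d, d)   # (W^2)^T stored the same way (column-major W^2)
S_M = torch.randperm(four_d)[:k]

# Row gathers read contiguous memory; a fused kernel would combine the gather
# with the matmul instead of materializing W1_T[S_M] first (the 3x I/O issue above).
h = torch.relu(y @ W1_T[S_M].T)   # sigma(y W^1_{S_M}), shape (1, k)
out = h @ W2_T[S_M]               # (W^2_{S_M})^T applied back to (1, d)
```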
page_content='[Figure 6 panels: (a) Language Modeling: perplexity versus sparsity on WikiText and C4; (b) Zero-Shot (left) and Five-Shot (right): accuracy versus sparsity on CB, COPA, Lambada, OpenBookQA, PIQA, RTE, and Winogrande.]\nFigure 6. Accuracy Trend for DEJAVU-OPT-175B. This figure shows the accuracy of DEJAVU-OPT-175B on language modeling datasets' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7}
page_content='and downstream tasks when we set different sparsity at test time. In general, DEJAVU-OPT-175B incurs no accuracy drop until 75% sparsity.\nFigure 7. Average per-token latency (ms) with batch size 1 on 8\nA100-80GB with NVLink when generating sequences with prompt\nlengths 128, 256, 512, and 1024, using FP16, for HuggingFace, FasterTransformer, and DejaVu. DEJAVU speeds up\ngeneration by 1.8-2× compared to the state-of-the-art FT and by' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7}
page_content='4.8-6×compared to the widely used HF implementation.\n5 Empirical Evaluation\nIn Section 5.1, we present the end-to-end results that show\nDEJAVU achieves over 2 ×reduction in token generation\nlatency compared to the state-of-the-art FasterTransformer\nand over 6 ×compared to Hugging Face with no accuracy\nloss. In Section 5.2, we perform a list of ablation studies such\nas independent evaluation on the inference-time contextual\nsparsity of the MLP block and the Attention block (Details' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7} |
page_content='are presented in Section C). At last, we present the additional\nresults to demonstrate the future possibility of sparsifying\nthe entire LLMs via layer skipping in Section C.3.\n5.1 End-to-End Result\nExperiment Setting: We compare the accuracy of DE-\nJAVU -OPT against the original OPT model on two lan-\nguage modeling datasets Wiki-Text (Merity et al., 2016)\nand C4 (Raffel et al., 2019) and seven few-shot downstream\ntasks: CB (de Marneffe et al., 2019), COPA (Gordon et al.,' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7} |
page_content='2012), Lambada (Radford et al., 2019), OpenBookQA (Mi-\nhaylov et al., 2018), PIQA (Bisk et al., 2020), RTE (Giampic-\ncolo et al., 2007), Winogrande (ai2, 2019). We use lm-eval-\nharness (Gao et al., 2021) for zero-shot and five-shot tasks.\nWe collect training data for the sparsity predictor using 500\nrandom data points from the C4 training dataset. Our exper-\niments are conducted on NVIDIA A100 80GB GPU servers.\nNo accuracy drop until 75% sparsity: In Figure 6, we' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7} |
page_content='present DEJAVU-OPT-175B’s accuracy trend. In a zero-shot setting, the average accuracy across tasks does not drop\nuntil 75% sparsity. A similar trend can be observed for\nthe five-shot setting, which verifies the model’s ability for\nin-context learning. This result is exceptionally encouraging\ngiven our observation in Figure 1(a), where we could impose\n85% sparsity when allowed full computation.\nOver 2× latency reduction: Figure 7 presents the latency' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7}
page_content='speed-up for token generation with OPT-175B at batch\nsize 1, where DEJAVU achieves the best performance. At\naround 75% sparsity, DEJAVU speeds up generation by\n1.8-2× compared to the state-of-the-art FasterTransformer\n(FT)1 and by 4.8-6× compared to the Hugging Face (HF) implementation2.\n5.2 Ablation Results\nContextual Sparsity for Larger Batches: Although this\npaper focuses on latency-sensitive settings, we demonstrate\nthat DEJAVU generalizes to larger batches. We present the' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7}
page_content='union contextual sparsity (the fraction of neurons/heads that are\nnot used by any of the inputs in the batch) for different batch\nsizes for the MLP and Attention blocks, respectively, in Fig-\nures 8 and 11. The union operation is essential to realize a fast\nsparse GEMM. Surprisingly, the number of MLP neurons and\nAttention heads that DEJAVU activates does not grow linearly\nwith the batch size. This suggests a power law distribution\nrather than a uniform distribution of parameter access from' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7}
page_content='all input examples. This provides an opportunity for poten-\ntially extending Dejavu to the high-throughput setting. For\nexample, we can first pre-process the inputs and batch similar\ninputs to enjoy a higher level of union contextual sparsity.\nContextual sparsity on MLP blocks: We study the contex-\ntual sparsification of the MLP block in OPT-175B. We leave\nthe Attention block as dense computation. Table 4 shows\nthe model performance at 85% sparsity. The MLP sparse' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7}
page_content='predictor introduces no accuracy loss on both zero-shot tasks\nand language modeling. In the training of the MLP sparse\npredictor, we observe that the sparse predictor achieves high\nvalidation accuracy. The shallow layer seems easier to model\n1 http://github.com/NVIDIA/FasterTransformer\n2 http://github.com/huggingface/transformers' metadata={'source': 'pdfs/paper_3.pdf', 'page': 7}
page_content='Table 4. Accuracy of zero-shot tasks and language modeling when sparsifying the MLP block and the Attention block separately. The\nsparsity is set at 85% for the MLP block and 50% for the Attention block. DEJAVU incurs no accuracy drop across the board.\nModel | CB | COPA | Lambada | OpenBookQA | PIQA | RTE | Winogrande | Wikitext | C4\nOPT-175B | 0.3523 | 0.86 | 0.7584 | 0.446 | 0.8096 | 0.6029 | 0.7261 | 10.8221 | 7.7224' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8}
page_content='DEJAVU-MLP-OPT-175B | 0.3544 | 0.85 | 0.7619 | 0.446 | 0.8096 | 0.6065 | 0.7206 | 10.7988 | 7.7393\nDEJAVU-Attention-OPT-175B | 0.3544 | 0.86 | 0.7586 | 0.4460 | 0.8063 | 0.5921 | 0.7245 | 10.8696 | 7.7393\nTable 5. DEJAVU-OPT-66B on zero-shot downstream tasks.\nModel | CB | COPA | Lambada | OpenBookQA | PIQA | RTE | Winogrande\nOPT-66B | 0.3928 | 0.87 | 0.7508 | 0.426 | 0.7921 | 0.6028 | 0.6890\nDEJAVU-OPT-66B | 0.4285 | 0.87 | 0.7458 | 0.434 | 0.7933 | 0.5884 | 0.6898\nTable 6. DEJAVU-BLOOM on zero-shot downstream tasks.\nCB | COPA | OpenBookQA | PIQA | RTE | Winogrande | Lambada' metadata={'source': 'pdfs/paper_3.pdf', 'page': 8}