Upload abstract/2306.14031.txt with huggingface_hub
abstract/2306.14031.txt +1 -0
abstract/2306.14031.txt
ADDED
@@ -0,0 +1 @@
+Compactness in deep learning can be critical to a model's viability in low-resource applications, and a common approach to extreme model compression is quantization. We consider Iterative Product Quantization (iPQ) with Quant-Noise to be the state of the art in this area, but this quantization framework suffers from preventable inference quality degradation due to prevalent empty clusters. In this paper, we propose several novel enhancements that aim to improve the accuracy of iPQ with Quant-Noise by focusing on resolving empty clusters. Our contribution, which we call Partitioning-Guided k-means (PG k-means), is a heavily augmented k-means implementation composed of three main components. First, we propose a partitioning-based pre-assignment strategy that ensures no initial empty clusters and encourages an even weight-to-cluster distribution. Second, we propose an empirically superior empty cluster resolution heuristic executed via cautious partitioning of large clusters. Finally, we construct an optional optimization step that consolidates intuitively dense clusters of weights to ensure shared representation. When applied to RoBERTa on a variety of tasks in the GLUE benchmark, the proposed approach consistently reduces the number of empty clusters in iPQ with Quant-Noise by a factor of 100 on average, uses 8 times fewer iterations during empty cluster resolution, and improves overall model accuracy by up to 12 percent.
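As a rough illustration of the pipeline the abstract describes, the sketch below shows partitioning-based pre-assignment and empty cluster resolution via splitting of large clusters, assuming plain Euclidean k-means over flattened weight sub-vectors. All names here (partition_preassign, split_largest_cluster, pg_kmeans) are hypothetical, the optional dense-cluster consolidation step is omitted, and the paper's actual implementation inside iPQ with Quant-Noise may differ.

import numpy as np

def partition_preassign(weights, k):
    # Pre-assignment: order points along their highest-variance coordinate
    # and cut the ordering into k contiguous, nearly equal-sized groups,
    # so no cluster starts empty and sizes are balanced.
    dim = np.argmax(weights.var(axis=0))
    order = np.argsort(weights[:, dim])
    labels = np.empty(len(weights), dtype=int)
    for c, chunk in enumerate(np.array_split(order, k)):
        labels[chunk] = c
    return labels

def split_largest_cluster(weights, labels, empty_id, k):
    # Empty cluster resolution: partition the largest cluster along its
    # highest-variance coordinate and donate the upper half of the
    # ordering to the empty cluster.
    sizes = np.bincount(labels, minlength=k)
    members = np.where(labels == np.argmax(sizes))[0]
    dim = np.argmax(weights[members].var(axis=0))
    order = members[np.argsort(weights[members, dim])]
    labels[order[len(order) // 2:]] = empty_id
    return labels

def pg_kmeans(weights, k, iters=25):
    labels = partition_preassign(weights, k)
    centroids = np.zeros((k, weights.shape[1]))
    for _ in range(iters):
        # Update the centroid of every non-empty cluster.
        for c in range(k):
            members = weights[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
        # Reassign each point to its nearest centroid.
        dists = np.linalg.norm(weights[:, None, :] - centroids[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Refill any cluster emptied by the reassignment.
        for c in range(k):
            if not np.any(labels == c):
                labels = split_largest_cluster(weights, labels, c, k)
    return centroids, labels

# Toy usage: quantize a 4096x8 block of weight sub-vectors into 256 codewords.
rng = np.random.default_rng(0)
codebook, assignments = pg_kmeans(rng.normal(size=(4096, 8)), k=256)

Pre-assigning by recursive/contiguous partitioning rather than random initialization is what guarantees the "no initial empty clusters" property the abstract claims; the resolution heuristic then only has to repair clusters that empty out during later reassignment steps.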