NeuZip: Memory-Efficient Training and Inference with Dynamic Compression of Neural Networks (arXiv:2410.20650)
COAT: Compressing Optimizer states and Activation for Memory-Efficient FP8 Training (arXiv:2410.19313)
SemiEvol: Semi-supervised Fine-tuning for LLM Adaptation (arXiv:2410.14745)
Why Does the Effective Context Length of LLMs Fall Short? (arXiv:2410.18745)
MiniPLM: Knowledge Distillation for Pre-Training Language Models (arXiv:2410.17215)
Model Merging in LLMs, MLLMs, and Beyond: Methods, Theories, Applications and Opportunities (arXiv:2408.07666, published Aug 14)
Memory-Efficient LLM Training with Online Subspace Descent (arXiv:2408.12857, published Aug 23)
Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution (arXiv:2409.12191, published Sep 18)
Scaling Smart: Accelerating Large Language Model Pre-training with Small Model Initialization (arXiv:2409.12903, published Sep 19)
SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration (arXiv:2410.02367, published Oct 3)
MM1.5: Methods, Analysis & Insights from Multimodal LLM Fine-tuning (arXiv:2409.20566, published Sep 30)
MaskLLM: Learnable Semi-Structured Sparsity for Large Language Models (arXiv:2409.17481, published Sep 26)
VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models (arXiv:2409.17066, published Sep 25)