Building Cost-Efficient Enterprise RAG applications with Intel Gaudi 2 and Intel Xeon Blog • Published May 9, 2024
SQFT: Low-cost Model Adaptation in Low-precision Sparse Foundation Models Paper • 2410.03750 • Published Oct 1, 2024
Mamba-Shedder: Post-Transformer Compression for Efficient Selective Structured State Space Models Paper • 2501.17088 • Published Jan 28, 2025
SQuARE: Sequential Question Answering Reasoning Engine for Enhanced Chain-of-Thought in Large Language Models Paper • 2502.09390 • Published Feb 2025
Low-Rank Adapters Meet Neural Architecture Search for LLM Compression Paper • 2501.16372 • Published Jan 23, 2025
RAG Foundry: A Framework for Enhancing LLMs for Retrieval Augmented Generation Paper • 2408.02545 • Published Aug 5, 2024
Inference Performance Optimization for Large Language Models on CPUs Paper • 2407.07304 • Published Jul 10, 2024
Shears: Unstructured Sparsity with Neural Low-rank Adapter Search Paper • 2404.10934 • Published Apr 16, 2024
A Hardware-Aware Framework for Accelerating Neural Architecture Search Across Modalities Paper • 2205.10358 • Published May 19, 2022
Distributed Speculative Inference of Large Language Models Paper • 2405.14105 • Published May 23, 2024