Router-Tuning: A Simple and Effective Approach for Enabling Dynamic-Depth in Transformers Paper • 2410.13184 • Published Oct 17, 2024 • 3
Capacity-Aware Inference: Mitigating the Straggler Effect in Mixture of Experts Paper • 2503.05066 • Published Mar 2025 • 3
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning Paper • 2501.12948 • Published Jan 22, 2025 • 346
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model Paper • 2405.04434 • Published May 7, 2024 • 18