arxiv:2406.18219

A Closer Look into Mixture-of-Experts in Large Language Models

Published on Jun 26 · Submitted by kamanphoebe on Jun 27

Abstract

Mixture-of-experts (MoE) is gaining increasing attention due to its unique properties and remarkable performance, especially for language tasks. By sparsely activating a subset of parameters for each token, the MoE architecture can increase model size without sacrificing computational efficiency, achieving a better trade-off between performance and training cost. However, the underlying mechanisms of MoE remain underexplored, and its degree of modularization is still in question. In this paper, we make an initial attempt to understand the inner workings of MoE-based large language models. Concretely, we comprehensively study the parametric and behavioral features of three recent MoE-based models and reveal some intriguing observations, including: (1) neurons act like fine-grained experts; (2) the router of MoE usually selects experts with larger output norms; (3) expert diversity increases with depth, while the last layer is an outlier. Based on these observations, we also provide suggestions for a broad spectrum of MoE practitioners, covering topics such as router design and expert allocation. We hope this work sheds light on future research on the MoE framework and other modular architectures. Code is available at https://github.com/kamanphoebe/Look-into-MoEs.
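
To make the sparse-activation idea concrete, here is a minimal, hedged sketch of a token-level top-k MoE feed-forward layer in PyTorch. It is illustrative only: the layer sizes, activation, and routing details are assumptions and differ from the exact designs of Mixtral 8x7B, DeepSeekMoE, and Grok-1.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Illustrative top-k MoE feed-forward layer (not any specific model's implementation)."""
    def __init__(self, d_model=512, d_ff=1024, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # Router ("gate"): each row of its weight acts as a per-expert gate embedding.
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is an independent feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x):  # x: (n_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)            # routing scores per expert
        weights, chosen = scores.topk(self.top_k, dim=-1)   # keep only the top-k experts per token
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(x)
        # Only the chosen experts are evaluated for each token (sparse activation).
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

# Toy usage: four tokens routed through the layer.
layer = MoELayer()
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```

Although all experts are instantiated, each token only pays the compute cost of its top-k experts, which is how MoE grows parameter count much faster than per-token FLOPs.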

Community

Paper author Paper submitter

We make an initial attempt to understand the inner workings of MoE-based large language models!

🔍 Our investigated models: Mixtral 8x7B, DeepSeekMoE, Grok-1

✨ Some of our intriguing observations:

  • Neurons act like fine-grained experts.
    The pairwise similarity values computed from the router's gate embeddings and from the experts' gate projection matrices are correlated, so the two may learn similar knowledge to perform expert selection (see the first sketch after this list).
    (Figure pearson.png: Pearson correlation between the two sets of similarity values.)

  • The router of MoE usually selects experts with larger output norms.
    (Figure: norm-score rank counting; a related check is given in the second sketch after this list.)
  • The expert diversity increases as the layer increases, while the last layer is an outlier.
    Pairwise similarities between experts are generally lower in deeper layers, whereas they increase again in the last layer(s); the pairwise-similarity measurement in the first sketch below can be applied layer by layer to see this trend.
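
The first sketch below shows one hedged way to measure the association described in the first observation: compute pairwise cosine similarities between the router's per-expert gate embeddings, compute pairwise similarities between the experts' gate projection matrices, and correlate the two lists. The tensor names, the whole-matrix cosine similarity, and the use of scipy's Pearson correlation are assumptions, not the paper's exact procedure.

```python
import torch
from itertools import combinations
from scipy.stats import pearsonr

def cosine(a, b):
    """Cosine similarity between two flattened tensors."""
    a, b = a.flatten().float(), b.flatten().float()
    return (torch.dot(a, b) / (a.norm() * b.norm() + 1e-8)).item()

def gate_vs_expert_correlation(gate_emb, gate_projs):
    """gate_emb:   (n_experts, d_model) router weight, one row per expert.
       gate_projs: list of per-expert gate-projection matrices, e.g. (d_ff, d_model)."""
    emb_sims, proj_sims = [], []
    for i, j in combinations(range(len(gate_projs)), 2):
        emb_sims.append(cosine(gate_emb[i], gate_emb[j]))       # similarity of gate embeddings
        proj_sims.append(cosine(gate_projs[i], gate_projs[j]))  # similarity of expert weights
    return pearsonr(emb_sims, proj_sims)  # (correlation, p-value)

# Toy usage with random weights; real use would load the weights of one MoE layer.
gate_emb = torch.randn(8, 512)
gate_projs = [torch.randn(1024, 512) for _ in range(8)]
print(gate_vs_expert_correlation(gate_emb, gate_projs))
```

Running the same pairwise expert-weight similarity separately for every layer and averaging the values is also one way to trace the per-layer diversity trend mentioned in the third observation.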
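
The second sketch gives a hedged version of the norm-score check behind the second observation: for each token, run every expert densely, record the output norms, and count how often the router's top-scored expert is also the one with the largest output norm. The router and expert modules here are random stand-ins, not the paper's evaluation code.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def norm_score_agreement(router, experts, x):
    """router:  maps (n_tokens, d_model) -> (n_tokens, n_experts) routing scores.
       experts: list of modules, each mapping (n_tokens, d_model) -> (n_tokens, d_model)."""
    scores = router(x)
    # Run every expert densely (analysis only) and record the norm of each output.
    norms = torch.stack([e(x).norm(dim=-1) for e in experts], dim=-1)
    # Fraction of tokens whose top-scored expert also has the largest output norm.
    return (scores.argmax(dim=-1) == norms.argmax(dim=-1)).float().mean().item()

# Toy usage with randomly initialized stand-ins; real use would hook into a trained MoE layer.
d_model, d_ff, n_experts = 512, 1024, 8
router = nn.Linear(d_model, n_experts, bias=False)
experts = [nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
           for _ in range(n_experts)]
print(norm_score_agreement(router, experts, torch.randn(64, d_model)))
```

With random stand-ins the number is not meaningful; applied to a trained model, a high agreement rate would reflect the observation that routers tend to pick experts with larger output norms.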

Paper author Paper submitter

Based on these observations, we also provide suggestions for a broad spectrum of MoE practitioners, such as router design and expert allocation. Check out our paper for more inspiring observations and suggestions! 🚀

Paper: https://arxiv.org/abs/2406.18219
GitHub: https://github.com/kamanphoebe/Look-into-MoEs
