Librarian Bot: Add moe tag to model #10
Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -1,13 +1,15 @@
  ---
- license: apache-2.0
  language:
  - fr
  - it
  - de
  - es
  - en
- inference: false
+ license: apache-2.0
  library_name: mlx
+ tags:
+ - moe
+ inference: false
  ---
  # Model Card for Mixtral-8x7B
  The Mixtral-8x7B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts. The Mixtral-8x7B outperforms Llama 2 70B on most benchmarks we tested.
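For reference, the README.md front matter after applying this change would read as follows (reconstructed from the new side of the diff above; only the YAML metadata block is shown):

```yaml
---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
library_name: mlx
tags:
- moe          # mixture-of-experts tag added by this PR
inference: false
---
```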