#3
opened by pcuenq (HF staff)
Files changed (1)
  1. README.md +25 -5
README.md CHANGED
@@ -1,3 +1,11 @@
+ ---
+ pipeline_tag: text-generation
+ inference: false
+ tags:
+ - mistral
+ - mlx
+ ---
+
  # Mistral-7B-v0.1

  The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters.
@@ -5,9 +13,21 @@ Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.

  For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/announcing-mistral-7b/).

- ## Model Architecture
-
- Mistral-7B-v0.1 is a transformer model, with the following architecture choices:
- - Grouped-Query Attention
- - Sliding-Window Attention
- - Byte-fallback BPE tokenizer
+ This repository contains the `mistral-7B-v0.1` weights in `npz` format suitable for use with Apple's MLX framework.
+
+ ## Use with MLX
+
+ ```bash
+ pip install mlx
+ pip install huggingface_hub hf_transfer
+ git clone https://github.com/ml-explore/mlx-examples.git
+
+ # Download model
+ export HF_HUB_ENABLE_HF_TRANSFER=1
+ huggingface-cli download --local-dir-use-symlinks False --local-dir mistral-7B-v0.1 mlx-community/mistral-7B-v0.1
+
+ # Run example
+ python mlx-examples/mistral/mistral.py --prompt "My name is" --model_path mistral-7B-v0.1
+ ```
+
+ Please refer to the [original model card](https://huggingface.co/mistralai/Mistral-7B-v0.1) for more details on Mistral-7B-v0.1.
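
After downloading, it can be useful to confirm that MLX can read the converted parameters before launching the example script. The snippet below is a minimal sanity check, assuming the repository ships its parameters as `mistral-7B-v0.1/weights.npz` (the layout the `mlx-examples/mistral` script expects); it loads the archive with `mlx.core.load` and reports how many tensors it contains.

```bash
# Optional sanity check: assumes the converted parameters live in mistral-7B-v0.1/weights.npz.
# Loads the npz archive with MLX and prints the tensor count plus one example entry.
python -c "
import mlx.core as mx
weights = mx.load('mistral-7B-v0.1/weights.npz')  # npz file -> dict of name -> mx.array
name, array = next(iter(weights.items()))
print(len(weights), 'tensors; e.g.', name, tuple(array.shape), array.dtype)
"
```

If the archive uses a different file name, point `mx.load` at whatever `.npz` file the download actually produced.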