---
license: apache-2.0
language:
- en
- fr
- de
- es
- it
- pt
- ru
- zh
- ja
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model: mistralai/Mistral-Nemo-Base-2407
tags:
- llama-cpp
- gguf-my-repo
---

# Triangle104/Mistral-Nemo-Base-2407-Q8_0-GGUF
This model was converted to GGUF format from [`mistralai/Mistral-Nemo-Base-2407`](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mistralai/Mistral-Nemo-Base-2407) for more details on the model.

---
## Model details
The Mistral-Nemo-Base-2407 Large Language Model (LLM) is a pretrained generative text model with 12B parameters, trained jointly by Mistral AI and NVIDIA. It significantly outperforms existing models of smaller or similar size.

For more details about this model, please refer to our release blog post.

### Key features

- Released under the Apache 2 License
- Pre-trained and instructed versions
- Trained with a 128k context window
- Trained on a large proportion of multilingual and code data
- Drop-in replacement for Mistral 7B

### Model Architecture

Mistral Nemo is a transformer model, with the following architecture choices (a quick way to verify these values is sketched after the list):

- Layers: 40
- Dim: 5,120
- Head dim: 128
- Hidden dim: 14,336
- Activation function: SwiGLU
- Number of heads: 32
- Number of kv-heads: 8 (GQA)
- Vocabulary size: 2**17 ~= 128k
- Rotary embeddings (theta = 1M)

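As a quick sanity check, these values can be read directly from the model's Hugging Face configuration. The snippet below is an illustrative sketch, not part of the original card; the field names are the standard `transformers` config keys, and `head_dim` in particular may only exist in recent `transformers` versions.

```python
# Sketch: inspect the architecture values via the Hugging Face config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("mistralai/Mistral-Nemo-Base-2407")
print(config.num_hidden_layers)    # layers, expected 40
print(config.hidden_size)          # dim, expected 5120
print(config.head_dim)             # head dim, expected 128 (recent transformers only)
print(config.intermediate_size)    # hidden dim, expected 14336
print(config.num_attention_heads)  # attention heads, expected 32
print(config.num_key_value_heads)  # kv-heads (GQA), expected 8
print(config.vocab_size)           # vocabulary size, expected 131072 (2**17)
print(config.rope_theta)           # rotary embedding theta, expected 1e6
```
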
### Metrics

#### Main Benchmarks

| Benchmark | Score |
| --- | --- |
| HellaSwag (0-shot) | 83.5% |
| Winogrande (0-shot) | 76.8% |
| OpenBookQA (0-shot) | 60.6% |
| CommonSenseQA (0-shot) | 70.4% |
| TruthfulQA (0-shot) | 50.3% |
| MMLU (5-shot) | 68.0% |
| TriviaQA (5-shot) | 73.8% |
| NaturalQuestions (5-shot) | 31.2% |

#### Multilingual Benchmarks (MMLU)

| Language | Score |
| --- | --- |
| French | 62.3% |
| German | 62.7% |
| Spanish | 64.6% |
| Italian | 61.3% |
| Portuguese | 63.3% |
| Russian | 59.2% |
| Chinese | 59.0% |
| Japanese | 59.0% |

### Usage

The model can be used with three different frameworks:

- `mistral_inference`: see below
- `transformers`: see below
- NeMo: see nvidia/Mistral-NeMo-12B-Base

### Mistral Inference

#### Install

It is recommended to use mistralai/Mistral-Nemo-Base-2407 with `mistral-inference`. For HF transformers code snippets, please keep scrolling.

```bash
pip install mistral_inference
```

#### Download

```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Nemo-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Mistral-Nemo-Base-2407", allow_patterns=["params.json", "consolidated.safetensors", "tekken.json"], local_dir=mistral_models_path)
```

#### Demo

After installing `mistral_inference`, a `mistral-demo` CLI command should be available in your environment.

```bash
mistral-demo $HOME/mistral_models/Nemo-v0.1
```

### Transformers

NOTE: Until a new release has been made, you need to install transformers from source:

```bash
pip install git+https://github.com/huggingface/transformers.git
```

If you want to use Hugging Face transformers to generate text, you can do something like this.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Base-2407"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = tokenizer("Hello my name is", return_tensors="pt")

# Generate up to 20 new tokens from the prompt and decode them back to text.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Unlike previous Mistral models, Mistral Nemo requires smaller temperatures. We recommend using a temperature of 0.3.

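For illustration, the same `generate` call from the snippet above can pass that temperature explicitly. This is a minimal sketch, not part of the original card; the `do_sample=True` flag and the reuse of `model`, `inputs`, and `tokenizer` from the previous example are assumptions for the sketch.

```python
# Minimal sketch: sampled decoding with the recommended temperature of 0.3.
# Reuses `model`, `inputs`, and `tokenizer` from the snippet above; temperature
# only takes effect when sampling is enabled (do_sample=True).
outputs = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=True,
    temperature=0.3,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
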
### Note

Mistral-Nemo-Base-2407 is a pretrained base model and therefore does not have any moderation mechanisms.

### The Mistral AI Team

Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Alok Kothari, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Augustin Garreau, Austin Birky, Bam4d, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Carole Rambaud, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gaspard Blanchet, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Hichem Sattouf, Ian Mack, Jean-Malo Delignon, Jessica Chudnovsky, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickaël Seznec, Nicolas Schuhl, Niklas Muhs, Olivier de Garrigues, Patrick von Platen, Paul Jacob, Pauline Buche, Pavan Kumar Reddy, Perry Savas, Pierre Stock, Romain Sauvestre, Sagar Vaze, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibault Schueller, Thibaut Lavril, Thomas Wang, Théophile Gervet, Timothée Lacroix, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall

---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux).

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Triangle104/Mistral-Nemo-Base-2407-Q8_0-GGUF --hf-file mistral-nemo-base-2407-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Triangle104/Mistral-Nemo-Base-2407-Q8_0-GGUF --hf-file mistral-nemo-base-2407-q8_0.gguf -c 2048
```
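
Once the server is running, you can send completion requests to it over HTTP. The request below is a minimal sketch assuming the default listen address (127.0.0.1:8080); the prompt and `n_predict` value are placeholders, not part of the original card.

```bash
# Sketch: query the running llama-server instance on its default port.
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'
```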

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo Triangle104/Mistral-Nemo-Base-2407-Q8_0-GGUF --hf-file mistral-nemo-base-2407-q8_0.gguf -p "The meaning to life and the universe is"
```
or 
```bash
./llama-server --hf-repo Triangle104/Mistral-Nemo-Base-2407-Q8_0-GGUF --hf-file mistral-nemo-base-2407-q8_0.gguf -c 2048
```