|
--- |
|
license: creativeml-openrail-m |
|
language: |
|
- en |
|
base_model: |
|
- nvidia/Mistral-NeMo-Minitron-8B-Instruct |
|
datasets: |
|
- HuggingFaceH4/ultrachat_200k |
|
tags: |
|
- Minitron |
|
- Llama |
|
- Ultrachat |
|
--- |
|
|
|
# Minitron-8B-Instruct-200K-GGUF
|
|
|
| Attribute | Description | |
|
|-------------------------|-----------------------------------------------------------------------------| |
|
| **Developed by** | prithivMLmods | |
|
| **License** | creativeml-openrail-m | |
|
| **Finetuned from model**| nvidia/Mistral-NeMo-Minitron-8B-Instruct | |
|
|
|
**Model Files**
|
|
|
| File Name | Size | Description | |
|
|--------------------------------------------|----------|---------------------------------------------------------------------------| |
|
| `.gitattributes` | 1.83kB | Git configuration file specifying attributes and LFS rules. | |
|
| `Minitron-8B-Instruct-200K-GGUF.F16.gguf`  | 16.8GB   | Full-precision 16-bit float model file, fine-tuned on UltraChat 200K.       |
|
| `Minitron-8B-Instruct-200K-GGUF.Q4_K_M.gguf`| 5.15GB | Quantized 4-bit model file, optimized for memory usage. | |
|
| `Minitron-8B-Instruct-200K-GGUF.Q5_K_M.gguf`| 6GB | Quantized 5-bit model file, balanced for performance and accuracy. | |
|
| `Minitron-8B-Instruct-200K-GGUF.Q8_0.gguf` | 8.95GB   | Quantized 8-bit model file, near full accuracy with reduced memory use.     |
|
| `README.md` | 76B | Markdown file with project information and instructions. | |
|
| `config.json` | 33B | JSON configuration file for setting model parameters. | |
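
To fetch a single quantization locally, `huggingface-cli` works well. This is a minimal sketch; the repository ID `prithivMLmods/Minitron-8B-Instruct-200K-GGUF` is assumed from the developer and model name above:

```bash
# Download only the Q4_K_M quant into the current directory
# (repo ID assumed from the model name; adjust if it differs)
huggingface-cli download prithivMLmods/Minitron-8B-Instruct-200K-GGUF \
  Minitron-8B-Instruct-200K-GGUF.Q4_K_M.gguf --local-dir .
```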
|
|
|
# Run with Ollama 🦙 |
|
|
|
### Download and Install Ollama |
|
|
|
To get started, download Ollama from [https://ollama.com/download](https://ollama.com/download) and install it on your Windows or Mac system. |
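
Once installed, you can confirm the CLI is on your PATH:

```bash
ollama --version
```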
|
|
|
### Run Your Own Model in Minutes |
|
|
|
### Steps to Run GGUF Models
|
|
|
#### 1. Create the Model File |
|
- Create an Ollama Modelfile and give it a suitable name, for example, `metallama200`.
|
|
|
#### 2. Add the `FROM` Instruction

- Point a `FROM` line at the base GGUF model file. For instance:
|
|
|
```bash |
|
FROM ./Minitron-8B-Instruct-200K-GGUF.Q4_K_M.gguf
|
``` |
|
|
|
- Make sure the GGUF model file is in the same directory as your Modelfile; a complete sketch follows below.
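
Putting steps 1 and 2 together, a minimal Modelfile might look like the sketch below. The `FROM` path matches the quantized file listed above; the parameter values and system prompt are illustrative assumptions, not tuned settings:

```bash
# metallama200: a minimal Ollama Modelfile sketch
FROM ./Minitron-8B-Instruct-200K-GGUF.Q4_K_M.gguf

# Illustrative generation settings (assumed values; adjust to taste)
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

SYSTEM "You are a helpful assistant."
```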
|
|
|
#### 3. Create the Model

- Use the following command in your terminal to build the model from the Modelfile:
|
|
|
```bash |
|
ollama create metallama200 -f ./metallama200 |
|
``` |
|
|
|
- Upon success, a confirmation message will appear. |
|
|
|
- To verify that the model was created successfully, run: |
|
|
|
```bash |
|
ollama list |
|
``` |
|
|
|
Ensure that `metallama200` appears in the list of models. |
|
|
|
--- |
|
|
|
## Running the Model |
|
|
|
To run the model, use: |
|
|
|
```bash |
|
ollama run metallama200 |
|
``` |
|
|
|
### Sample Usage |
|
|
|
For example, from a Windows command prompt:
|
|
|
```bash |
|
D:\>ollama run metallama200 |
|
``` |
|
|
|
Example interaction: |
|
|
|
```plaintext |
|
>>> write a mini passage about space x |
|
Space X, the private aerospace company founded by Elon Musk, is revolutionizing the field of space exploration. |
|
With its ambitious goals to make humanity a multi-planetary species and establish a sustainable human presence in |
|
the cosmos, Space X has become a leading player in the industry. The company's spacecraft, like the Falcon 9, have |
|
demonstrated remarkable capabilities, allowing for the transport of crews and cargo into space with unprecedented |
|
efficiency. As technology continues to advance, the possibility of establishing permanent colonies on Mars becomes |
|
increasingly feasible, thanks in part to the success of reusable rockets that can launch multiple times without |
|
sustaining significant damage. The journey towards becoming a multi-planetary species is underway, and Space X |
|
plays a pivotal role in pushing the boundaries of human exploration and settlement. |
|
``` |
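
Beyond the interactive prompt, Ollama also serves a local REST API on port 11434, so the model can be queried programmatically. A minimal sketch with an example prompt:

```bash
# Single non-streaming completion from the local Ollama server
curl http://localhost:11434/api/generate -d '{
  "model": "metallama200",
  "prompt": "Write a one-sentence summary of GGUF quantization.",
  "stream": false
}'
```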
|
|
|
--- |