---
tags:
- merge
- mergekit
- french
license: apache-2.0
---
# Vigalpaca-French-7B-ties-GGUF

bofenghuang/vigostral-7b-chat

base model: jpacifico/French-Alpaca-7B-Instruct-beta

### Usage

This quantized q8_0 GGUF version can be used on a CPU device, and is compatible with llama.cpp, Ollama, and LM Studio.
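With llama.cpp, the model can be run directly from the command line. A minimal sketch, assuming the q8_0 GGUF file has been downloaded locally (the file name and prompt below are illustrative, and the CLI binary name depends on the llama.cpp build):

```shell
# Run a single completion on CPU with llama.cpp's CLI
# (model path is illustrative; point -m at the downloaded q8_0 GGUF file)
./llama-cli -m ./Vigalpaca-French-7B-ties-Q8_0.gguf \
  -p "[INST] Bonjour, qui es-tu ? [/INST]" \
  -n 256
```

`-n` caps the number of tokens generated; the `[INST]`/`[/INST]` markers match the stop parameters used in the Modelfile below.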
Ollama Modelfile example:

```
PARAMETER stop "[INST]"
PARAMETER stop "[/INST]"
SYSTEM """Tu es un assistant IA nommé Vigalpaca. Tu dois répondre de manière concise et bienveillante aux questions posées par l'utilisateur."""
```
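Assuming the GGUF file and the Modelfile above sit in the current directory (with the Modelfile's `FROM` line pointing at the downloaded `.gguf` file), the model can be registered and started with the standard Ollama CLI; the model name `vigalpaca` here is illustrative:

```shell
# Register a local model from the Modelfile in the current directory
ollama create vigalpaca -f Modelfile

# Open an interactive chat session with the newly created model
ollama run vigalpaca
```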
### Limitations

The Vigalpaca model is a quick demonstration that a base 7B model can be easily merged/fine-tuned to specialize in a particular language.
It does not have any moderation mechanisms.

- **Developed by:** Jonathan Pacifico. Vigostral model by Bofeng Huang (special thanks), 2024
- **Model type:** LLM
- **Language(s) (NLP):** French
- **License:** Apache-2.0