Update README.md
README.md CHANGED

@@ -12,9 +12,9 @@ base_model:
   - yleo/OgnoMonarch-7B
 ---
 
-# 
+# ramonda-monarch-7b
 
-
+ramonda-monarch-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
 * [eren23/ogno-monarch-jaskier-merge-7b](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b)
 * [liminerity/Omningotex-7b-slerp](https://huggingface.co/liminerity/Omningotex-7b-slerp)
 * [yleo/OgnoMonarch-7B](https://huggingface.co/yleo/OgnoMonarch-7B)
@@ -53,7 +53,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "mayacinka/
+model = "mayacinka/ramonda-monarch-7b"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
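The LazyMergekit notebook linked in the README generates a mergekit YAML config that drives the merge. That config is not part of this commit, so the sketch below is purely illustrative: the merge method, base model choice, weights, and densities are assumptions for the sake of example, not values taken from this repository — only the three source model names come from the diff above.

```yaml
# Hypothetical mergekit config — merge_method, base_model, weights, and
# densities are illustrative assumptions; only the model names are from the README.
models:
  - model: eren23/ogno-monarch-jaskier-merge-7b
    parameters:
      density: 0.5
      weight: 0.4
  - model: liminerity/Omningotex-7b-slerp
    parameters:
      density: 0.5
      weight: 0.3
  - model: yleo/OgnoMonarch-7B
    parameters:
      density: 0.5
      weight: 0.3
merge_method: dare_ties   # assumed; could equally be slerp, ties, etc.
base_model: yleo/OgnoMonarch-7B
dtype: bfloat16
```

A config of this shape would be passed to `mergekit-yaml` (or run via the LazyMergekit notebook) to produce the merged weights that the README's inference snippet then loads.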