---
license: apache-2.0
language:
- en
- ja
tags:
- finetuned
library_name: transformers
pipeline_tag: text-generation
---
<img src="./wabisabi_logo.jpg" width="100%" height="20%" alt="">

## Model Card for Wabisabi-v1.0

This Large Language Model (LLM) is a version of Mistral-7B-v0.1 fine-tuned on a novel dataset.

Wabisabi has the following changes compared to Mistral-7B-v0.1 (a minimal loading sketch follows the list):

- 128k context window (8k context in v0.1)
- High-quality generation in both Japanese and English
- Can generate NSFW content
- Retains earlier context without forgetting, even during long-context generation
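
As a quick orientation, here is a minimal sketch of loading the model with Transformers. The repository id is a placeholder, not the confirmed Hugging Face path for this model, and the dtype/device settings are assumptions rather than a tested recipe.

```python
# Minimal loading sketch. "<org>/Wabisabi-v1.0" is a placeholder repo id --
# substitute the actual Hugging Face repository for this model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "<org>/Wabisabi-v1.0"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 7B weights near 15 GB
    device_map="auto",           # requires `accelerate`; places layers automatically
)
```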

This model was created with the help of GPUs from the first LocalAI hackathon.

We would like to take this opportunity to thank them.

## List of Creation Methods

- Chat Vector applied to multiple models (see the sketch after this list)
- Simple linear merging of the resulting models
- Domain and sentence enhancement with LoRA
- Context expansion
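
Below is a conceptual sketch of the Chat Vector step under its usual formulation: the weight delta between an instruction-tuned model and its base is added to another model of the same architecture. The model ids are illustrative, and this is not necessarily the authors' exact recipe.

```python
# Chat Vector sketch: target + (chat - base), applied tensor by tensor.
# "<ja-base>/model" is a placeholder for a Japanese continued-pretraining base.
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
chat = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1")
target = AutoModelForCausalLM.from_pretrained("<ja-base>/model")  # placeholder

base_sd, chat_sd = base.state_dict(), chat.state_dict()
with torch.no_grad():
    for name, param in target.state_dict().items():
        if "embed" in name or "lm_head" in name:
            continue  # vocab-dependent tensors may differ in shape; skip them
        param += chat_sd[name] - base_sd[name]  # add the chat vector in place

target.save_pretrained("wabisabi-chatvector")  # intermediate for later merging
```

A simple linear merge of several models produced this way would then average the corresponding tensors with chosen weights.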

## Instruction format

Vicuna-v1.1
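
For reference, a single-turn prompt in the Vicuna-v1.1 layout looks like the sketch below. The system text is the stock Vicuna wording and may be adapted.

```python
# Building a single-turn Vicuna-v1.1 prompt (system text is the stock wording).
SYSTEM = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions."
)

def build_prompt(user_message: str) -> str:
    # Vicuna-v1.1: system text, then alternating "USER:" / "ASSISTANT:" turns.
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(build_prompt("Write a short story set in autumn Kyoto."))
```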

## Other points to keep in mind

- The training data may be biased; review generated text carefully.
- Memory usage can be large during long-context inference.
- If possible, we recommend running inference with llama.cpp rather than Transformers (see the sketch below).
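
A minimal inference sketch via the llama-cpp-python bindings: it assumes a GGUF conversion of the model exists (the file name is a placeholder), and the context size is an example value.

```python
# Inference via llama-cpp-python; the GGUF file name is a placeholder and the
# prompt follows the Vicuna-v1.1 layout described above.
from llama_cpp import Llama

llm = Llama(
    model_path="wabisabi-v1.0.Q4_K_M.gguf",  # placeholder GGUF conversion
    n_ctx=32768,  # example window; raise toward 128k as memory allows
)

prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's "
    "questions. USER: Introduce yourself briefly. ASSISTANT:"
)
out = llm(prompt, max_tokens=256, stop=["USER:"])
print(out["choices"][0]["text"])
```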