---
license: apache-2.0
language:
- en
- ja
tags:
- finetuned
- not-for-all-audiences
library_name: transformers
pipeline_tag: text-generation
---
<img src="./ninjalogo.svg" width="100%" height="20%" alt=""> 

# Our Models
- [Vecteus](https://huggingface.co/Local-Novel-LLM-project/Vecteus-v1)

- [Ninja-v1](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1) 

- [Ninja-v1-NSFW](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW)

- [Ninja-v1-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-128k)

- [Ninja-v1-NSFW-128k](https://huggingface.co/Local-Novel-LLM-project/Ninja-v1-NSFW-128k)

## Model Card for Ninja-v1-NSFW-128k

This Mistral-7B-based large language model (LLM) is a version of Mistral-7B-v0.1 fine-tuned on a novel dataset.

Ninja-v1-NSFW-128k has the following changes compared to Mistral-7B-v0.1:
- 128k context window (8k context in v0.1)
- High-quality generation in both Japanese and English
- Retains earlier content even during long-context generation
- Can generate NSFW content

This model was created with the help of GPUs provided at the first LocalAI hackathon, and we would like to take this opportunity to thank everyone involved.

## List of Creation Methods

- Applying chat vectors across multiple models
- Simple linear merging of the resulting models
- Domain and sentence enhancement with LoRA
- Context expansion
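
The chat-vector and linear-merge steps above can be sketched roughly as follows. This is a hypothetical illustration, not the project's actual merge code: plain dicts of floats stand in for real model `state_dict`s, and all names are made up.

```python
# Chat vector: the weight delta between a fine-tuned model and its base.
# Adding that delta to another model transfers (some of) the fine-tuning;
# several resulting models can then be combined by weighted linear merge.

def chat_vector(fine_tuned, base):
    """Per-parameter difference: fine_tuned - base."""
    return {k: fine_tuned[k] - base[k] for k in base}

def apply_vector(model, vector, scale=1.0):
    """Add a (scaled) chat vector onto another model's weights."""
    return {k: model[k] + scale * vector[k] for k in model}

def linear_merge(models, weights):
    """Weighted average of several models' weights."""
    keys = models[0].keys()
    return {k: sum(w * m[k] for m, w in zip(models, weights)) for k in keys}

# Toy single-parameter "models" (hypothetical values):
base = {"w": 1.0}
instruct = {"w": 1.5}   # e.g. an instruction-tuned variant of base
target = {"w": 2.0}     # e.g. a Japanese base model

delta = chat_vector(instruct, base)    # {"w": 0.5}
chatty = apply_vector(target, delta)   # {"w": 2.5}
merged = linear_merge([chatty, target], [0.5, 0.5])
print(merged)  # {"w": 2.25}
```

With real checkpoints, the same arithmetic would run per-tensor over `state_dict()` entries (typically via a merge tool such as mergekit rather than by hand).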

## Instruction format

  Ninja adopts the Vicuna prompt format and supports multi-turn conversation.
  The prompt should be formatted as follows:
  ```
  USER: Hi ASSISTANT: Hello.</s>
  USER: Who are you?
  ASSISTANT: I am ninja.</s>
  ```
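
The format above can be assembled programmatically. The helper below is a minimal sketch (the function name and exact line-break placement are our own, not from the model card); each completed assistant turn is closed with the `</s>` end-of-sequence token, and the prompt ends with a bare `ASSISTANT:` to cue the model's reply.

```python
def build_vicuna_prompt(turns, next_user_message):
    """Build a Vicuna-style multi-turn prompt.

    turns: list of (user, assistant) pairs from earlier in the conversation.
    """
    parts = []
    for user, assistant in turns:
        # Completed turns end with the </s> end-of-sequence token.
        parts.append(f"USER: {user} ASSISTANT: {assistant}</s>")
    # The pending turn ends with "ASSISTANT:" so the model continues from there.
    parts.append(f"USER: {next_user_message} ASSISTANT:")
    return "\n".join(parts)

prompt = build_vicuna_prompt([("Hi", "Hello.")], "Who are you?")
print(prompt)
# USER: Hi ASSISTANT: Hello.</s>
# USER: Who are you? ASSISTANT:
```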

## Example prompt improvements (Japanese)

  - BAD: あなたは○○として振る舞います ("You will behave as ○○")
  - GOOD: あなたは○○です ("You are ○○")

  - BAD: あなたは○○ができます ("You can do ○○")
  - GOOD: あなたは○○をします ("You do ○○")

## Performing inference

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "Local-Novel-LLM-project/Ninja-v1-NSFW-128k"
new_tokens = 1024

# trust_remote_code=True is required for the YaRN context expansion.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# "You are a professional novelist.\nPlease write a novel\n--------"
system_prompt = "あなたはプロの小説家です。\n小説を書いてください\n-------- "

prompt = input("Enter a prompt: ")
system_prompt += prompt + "\n-------- "
model_inputs = tokenizer([system_prompt], return_tensors="pt").to("cuda")

generated_ids = model.generate(**model_inputs, max_new_tokens=new_tokens, do_sample=True)
print(tokenizer.batch_decode(generated_ids)[0])
```

## Merge recipe

- WizardLM2 - mistralai/Mistral-7B-v0.1
- NousResearch/Yarn-Mistral-7b-128k - mistralai/Mistral-7B-v0.1
- Elizezen/Antler-7B - stabilityai/japanese-stablelm-instruct-gamma-7b
- Elizezen/LewdSniffyOtter-7B - Elizezen/SniffyOtter-7B
- NTQAI/chatntq-ja-7b-v1.0

The characteristics of each model are as follows:

- WizardLM2: high-quality multitasking model
- Yarn-Mistral-7b-128k: Mistral model with a 128k context window
- Antler-7B: model specialized for novel writing
- NTQAI/chatntq-ja-7b-v1.0: high-quality Japanese-specialized model
- Elizezen/LewdSniffyOtter-7B: Japanese NSFW-specialized model

## Other points to keep in mind
- The training data may be biased; exercise caution with generated text.
- Set `trust_remote_code` to `True` for context expansion with YaRN.
- Memory usage can be high for long-context inference.
- If possible, we recommend running inference with llama.cpp rather than Transformers.