|
--- |
|
datasets: |
|
- vicgalle/worldsim-claude-opus |
|
- macadeliccc/opus_samantha |
|
- anthracite-org/kalo-opus-instruct-22k-no-refusal |
|
- lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT |
|
- lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K |
|
- QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT |
|
- ChaoticNeutrals/Luminous_Opus |
|
- kalomaze/Opus_Instruct_3k |
|
- kalomaze/Opus_Instruct_25k |
|
language: |
|
- en |
|
base_model: |
|
- meta-llama/Llama-3.1-8B |
|
pipeline_tag: text-generation |
|
license: llama3.1 |
|
--- |
|
|
|
![L3.1-8B-Fabula](https://files.catbox.moe/blwlvb.jpeg) |
|
|
|
# L3.1-8B-Fabula |
|
|
|
L3.1-8B-Fabula is a fine-tuned version of Meta's Llama 3.1 8B model, optimized for roleplay and general-knowledge tasks.
|
|
|
## Model Details |
|
|
|
- **Base Model**: [Llama-3.1-8B](https://hf.co/meta-llama/Llama-3.1-8B) |
|
- **Chat Template**: ChatML |
|
- **Max Input Tokens**: 32,768 |
|
- **Datasets Used In Fine-tuning:** |
|
* [vicgalle/worldsim-claude-opus](https://hf.co/datasets/vicgalle/worldsim-claude-opus) |
|
* [macadeliccc/opus_samantha](https://hf.co/datasets/macadeliccc/opus_samantha) |
|
* [anthracite-org/kalo-opus-instruct-22k-no-refusal](https://hf.co/datasets/anthracite-org/kalo-opus-instruct-22k-no-refusal) |
|
* [lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT](https://hf.co/datasets/lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-9.5K-ShareGPT) |
|
* [lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K](https://hf.co/datasets/lodrick-the-lafted/Sao10K_Claude-3-Opus-Instruct-3.3K) |
|
* [QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT](https://hf.co/datasets/QuietImpostor/Sao10K-Claude-3-Opus-Instruct-15K-ShareGPT) |
|
* [ChaoticNeutrals/Luminous_Opus](https://hf.co/datasets/ChaoticNeutrals/Luminous_Opus) |
|
* [kalomaze/Opus_Instruct_3k](https://hf.co/datasets/kalomaze/Opus_Instruct_3k) |
|
* [kalomaze/Opus_Instruct_25k](https://hf.co/datasets/kalomaze/Opus_Instruct_25k) |
|
|
|
## Chat Template |
|
- ChatML was used as the chat template during fine-tuning.
|
```js |
|
/**
 * Formats a message array into the ChatML prompt format.
 * @param {Array<{role: string, name?: string, content: string}>} messages
 * @returns {{prompt: string, stop: string}}
 */
function chatml2(messages) {
  const isLastMessageAssistant = messages[messages.length - 1]?.role === "assistant";

  return {
    prompt: messages.map((message, index) => {
      const nameStr = message.name ? ` [${message.name}]` : "";
      const isLast = index === messages.length - 1;
      // Close every message except a trailing assistant message,
      // which is left open for the model to continue.
      const needsEndTag = !isLastMessageAssistant || !isLast;

      return `<|im_start|>${message.role.toLowerCase()}${nameStr}\n${message.content}${needsEndTag ? "<|im_end|>" : ""}`;
    }).join("\n") + (isLastMessageAssistant ? "" : "\n<|im_start|>assistant\n"),
    stop: "<|im_end|>"
  };
}
|
``` |
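
For example, a minimal sketch with illustrative messages:

```js
// Illustrative usage of chatml2 from above.
const { prompt, stop } = chatml2([
  { role: "system", content: "You are {{char}}." },
  { role: "user", name: "Anon", content: "Hello!" }
]);

console.log(prompt);
// <|im_start|>system
// You are {{char}}.<|im_end|>
// <|im_start|>user [Anon]
// Hello!<|im_end|>
// <|im_start|>assistant

console.log(stop); // <|im_end|>
```

Because the last message is not from the assistant, the prompt ends with an open `<|im_start|>assistant` header, cueing the model to write the next turn.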
|
|
|
I would highly recommend adding a set of rules as an assistant-role message at the end of the chat history, like the example below:
|
```md |
|
<rules for="{{char}}'s responses"> |
|
1. I will write a short yet detailed response as {{char}} (I will try to keep it under 300 characters).
|
|
|
2. Response formatting: |
|
"This is for talking" |
|
*This is for actions, or for self-reflection if I decide to write {{char}}'s response in first person*
|
ex: "Hello, there!" *{name} waves,* "How are you doing today?" |
|
|
|
3. When I feel it is {{user}}'s turn to talk, I will not act as {{user}} or for them; I will simply stop generating text by emitting my EOS (end-of-sequence) token "<|im_end|>", letting the user write their response as {{user}}.
|
|
|
4. I will use my past messages as an example of how {{char}} speaks.
|
</rules> |
|
**{{char}}'s response:** |
|
|
|
``` |
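
As a rough sketch of wiring this together (assuming the `chatml2` helper above; `chatHistory` and the truncated `rules` string are placeholders), the rules block can be passed as a trailing assistant-role message so the prompt is left open and the model continues directly after it:

```js
// Sketch: append the rules as a final assistant message. Since the last
// message's role is "assistant", chatml2 omits the closing <|im_end|>,
// so generation continues right after "**{{char}}'s response:**".
const rules = `<rules for="{{char}}'s responses">
... (rules 1-4 from the example above) ...
</rules>
**{{char}}'s response:**`;

const { prompt } = chatml2([
  ...chatHistory, // earlier turns (placeholder)
  { role: "assistant", content: rules }
]);
```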