---
base_model: []
library_name: transformers
tags:
- mergekit
- merge
- llama-3
- 70b
---
|

# EXL2 quants of [ryzen88/Llama-3-70b-Arimas-story-RP-V1](https://huggingface.co/ryzen88/Llama-3-70b-Arimas-story-RP-V1)
|

* [3.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1-3.0bpw-exl2)
* [3.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1-3.5bpw-exl2)
* [4.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1-4.0bpw-exl2)
* [4.50 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1-4.5bpw-exl2)
* [5.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1-5.0bpw-exl2)
* [6.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1-6.0bpw-exl2)
* [8.00 bits per weight](https://huggingface.co/kim512/Llama-3-70b-Arimas-story-RP-V1-8.0bpw-exl2)
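
One of these quants can be fetched locally with the Hugging Face CLI; as a sketch, using the 4.0 bpw repo as an example (the local directory name is an arbitrary choice):

```shell
# Download the 4.0 bpw EXL2 quant into a local folder.
# Requires the Hugging Face Hub CLI: pip install huggingface_hub
huggingface-cli download \
  kim512/Llama-3-70b-Arimas-story-RP-V1-4.0bpw-exl2 \
  --local-dir ./Llama-3-70b-Arimas-story-RP-V1-4.0bpw-exl2
```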
|

# Llama-3-70b-Arimas-story-RP-V1

This is a follow-up to, and an improvement on, my original Lumi-Tess model.
|

# Model

A large-context, uncensored Llama 3 instruct model focused on story writing and RP.
I found the Smaug version of Llama very impressive, except for a couple of quirks and its default context window.
This merge brings in Giraffe instruct for the long context window and is essentially a Smaug / Lumi-Tess merger.
I plan to do the same with a Gradient model and compare it to this Giraffe version.
Breadcrumbs_ties really is awesome.
|

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
A big thanks to the creators of the models used in this merge.
|

## Merge Details

### Merge Method

This model was merged using the breadcrumbs_ties merge method, with Z:\Llama-3-Giraffe-70B-Instruct as the base.
|

### Models Merged

The following models were included in the merge:
* \Smaug-Llama-3-70B-Instruct
* \Llama-3-Lumimaid-70B-v0.1-alt
* \Tess-2.0-Llama-3-70B-v0.2
|

### Configuration

The following YAML configuration was used to produce this model:
|

```yaml
models:
  - model: \Llama-3-Giraffe-70B-Instruct
    parameters:
      weight: 0.25
      density: 0.90
      gamma: 0.01
  - model: \Smaug-Llama-3-70B-Instruct
    parameters:
      weight: 0.30
      density: 0.90
      gamma: 0.01
  - model: \Tess-2.0-Llama-3-70B-v0.2
    parameters:
      weight: 0.15
      density: 0.90
      gamma: 0.01
  - model: \Llama-3-Lumimaid-70B-v0.1-alt
    parameters:
      weight: 0.30
      density: 0.90
      gamma: 0.01
merge_method: breadcrumbs_ties
base_model: \Llama-3-Giraffe-70B-Instruct
dtype: bfloat16
```
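
For reference, a configuration like this is typically applied with the mergekit CLI. A minimal sketch, assuming the config is saved as `config.yaml` (a hypothetical filename) and the model directories it references exist locally:

```shell
# Install mergekit and run the merge described by config.yaml.
# The output directory name here is just an example.
pip install mergekit
mergekit-yaml config.yaml ./Llama-3-70b-Arimas-story-RP-V1 --cuda
```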