---
base_model:
- ResplendentAI/Datura_7B
- ChaoticNeutrals/Cookie_7B
library_name: transformers
tags:
- not-for-all-audiences
license: apache-2.0
datasets:
- ResplendentAI/Luna_NSFW_Text
- unalignment/toxic-dpo-v0.2
- ResplendentAI/Synthetic_Soul_1k
- grimulkan/theory-of-mind
- lemonilia/LimaRP
- PygmalionAI/PIPPA
language:
- en
---
# DaturaCookie

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/5jG2dft51fgPcGUGc-4Ym.png)

Proficient at roleplaying and lighthearted conversation, this model is prone to NSFW outputs.
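A minimal sketch of loading the model for text generation with Hugging Face Transformers (the repo id below is a placeholder for wherever this card is hosted, and the sampling settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute this card's actual repository.
model_id = "ResplendentAI/DaturaCookie"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # load the bfloat16 weights as stored
    device_map="auto",    # spread layers across available devices
)

inputs = tokenizer("Tell me a lighthearted story.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```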

# Vision/multimodal capabilities

If you want to use the vision functionality, you must use the latest version of Koboldcpp. To use this model's multimodal capabilities, load the specified mmproj file, which can be found inside this model repo.

You can load the mmproj by using the corresponding section in the interface:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/UxH8OteeRbD1av1re0yNZ.png)
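If you prefer launching Koboldcpp from the command line instead, the projector can be passed alongside the model. A minimal sketch, assuming a GGUF quant and its mmproj file have already been downloaded from this repo (both file names are placeholders):

```python
import subprocess

# Both file names are placeholders -- substitute the GGUF quant and
# mmproj file actually downloaded from this repo.
subprocess.run([
    "python", "koboldcpp.py",
    "--model", "DaturaCookie.Q4_K_M.gguf",   # quantized model weights
    "--mmproj", "mmproj-DaturaCookie.gguf",  # vision projector
])
```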


### Models Merged

The following models were included in the merge:
* [ResplendentAI/Datura_7B](https://huggingface.co/ResplendentAI/Datura_7B)
* [ChaoticNeutrals/Cookie_7B](https://huggingface.co/ChaoticNeutrals/Cookie_7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ChaoticNeutrals/Cookie_7B
        layer_range: [0, 32]
      - model: ResplendentAI/Datura_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: ResplendentAI/Datura_7B
parameters:
  t:
    - filter: self_attn
      value: [1, 0.75, 0.5, 0.25, 0]
    - filter: mlp
      value: [0, 0.25, 0.5, 0.75, 1]
    - value: 0.5
dtype: bfloat16
```
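
For intuition, slerp spherically interpolates each pair of weight tensors; under mergekit's convention, t = 0 keeps the `base_model` tensor (Datura_7B) and t = 1 takes the other model's (Cookie_7B), with the five-element lists above spread linearly across layer depth. So self-attention weights shift from Cookie-heavy early layers to Datura-heavy late layers, MLP weights run the opposite way, and all remaining tensors are blended evenly at t = 0.5. A minimal NumPy sketch of the interpolation itself, not mergekit's actual implementation:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors.

    t = 0 returns a (the base model's tensor), t = 1 returns b.
    """
    a_dir = a.ravel() / (np.linalg.norm(a) + eps)
    b_dir = b.ravel() / (np.linalg.norm(b) + eps)
    # Angle between the two tensors, treated as flat vectors.
    omega = np.arccos(np.clip(np.dot(a_dir, b_dir), -1.0, 1.0))
    if omega < eps:
        # Nearly parallel tensors: plain linear interpolation is stable.
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)
```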