hooking-dev committed e26637b (parent: 1dadd72): Update README.md

Changes to the YAML front matter in this commit: the unsloth tag and the old line "**Finetuned from model :** hooking-dev/Jenifer-v1.0-4bit" were removed; the sft tag, base_model: mistralai/Mistral-7B-v0.3, extra_gated_fields, and the full model card below were added.

Updated README.md:

---
license: apache-2.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
- sft
base_model: mistralai/Mistral-7B-v0.3

extra_gated_fields:
  Name: text
  Company: text
  Country: country
  I want to use this model for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  You agree not to use the model to conduct experiments that cause harm to human subjects or to obtain illegal knowledge, and you agree to use this model for non-commercial purposes ONLY: checkbox
---

[<img src="https://ai.hooking.co.il/upload/images/logo/0qUf-dashboard-hookingai-logo.png"/>](https://software.hooking.ltd/)

# Model Card for Jennifer-v1.0

**This is an experimental model.**

[<img src="https://software.hooking.ltd/wp-content/uploads/2024/05/DALL·E-2024-05-23-16.32.02-A-futuristic-digital-illustration-representing-the-concept-of-an-advanced-AI-model-Jennifer-v1.0-by-Hooking-AI.-The-image-should-showcase-a-dynamic.webp"/>](https://huggingface.co/hooking-dev/Jennifer-v1.0)

## Model Description

- **Developed by:** Hooking AI
- **License:** Apache-2.0
- **Original Model:** Jennifer-v1.0 (base model: Mistral-7B-v0.3)
- **Purpose:** Jennifer-v1.0 is designed to generate high-quality, contextually relevant text for various applications, leveraging the flexibility of the Mistral architecture for domain-specific and uncensored use (a minimal loading sketch is given below).
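
A minimal sketch (not part of the original card) of loading the model with the Hugging Face transformers library, assuming the hooking-dev/Jennifer-v1.0 repository linked above hosts standard Mistral-style causal-LM weights and a tokenizer:

```python
# Not from the original card: a minimal sketch of loading Jennifer-v1.0 with
# the standard transformers causal-LM API. Assumes the repository exposes
# ordinary Mistral-style weights and a tokenizer.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "hooking-dev/Jennifer-v1.0"  # repository id taken from the card's link

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package; drop it to load on CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
```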

## Languages

The model's output is primarily in English, but it may also handle other languages, since it is fine-tuned from Mistral-7B-v0.3.

## Model Structure

### Data Instances

A typical data instance comes from a proprietary dataset used for training uncensored text-generation models.

## Model Creation

### Curation Rationale

The model was curated to create a comprehensive resource for training general-purpose text-generation models, with a sole focus on delivering uncensored, accurate, and relevant content.

### Source Data

- **Initial Data Collection and Normalization:** Data was generated partly synthetically by private models, alongside a private dataset owned by HookingAI, and carefully normalized to maintain consistency and quality.
- **Who are the source language producers?** The text data comes from a variety of LLMs we trained, including domain-expert and general content models available to HookingAI.

## Considerations for Using the Data

**This model is not for kids!!**
**The content is uncensored!!**

### Social Impact of Model

This model supports the development of AI systems capable of generating contextually accurate, uncensored, and nuanced text, contributing to better information dissemination and automation of content creation for specific uses.

### Discussion of Biases

As with any model, there is potential for biases and hallucinations. **The content may also be sexual or illegal.** Users should take this into account when deploying models trained on this data.

### Other Known Limitations

The effectiveness and applicability of the model may be limited by the diversity and scope of its content.

## Additional Information

**WE ARE NOT RESPONSIBLE FOR MISUSE!!**

**Prompt template (Alpaca):**
```
You are an AI assistant trained by HookingAI. Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
###
Instruction:
{User}
###
Response:
{Assistant}
```

### System Prompt:
```
This is a conversation between a user and an uncensored AI assistant trained by HookingAI.
```
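
One plausible way (not part of the original card) to combine the system prompt with the Alpaca-style template above and query the model, assuming `tokenizer` and `model` were loaded as in the earlier sketch; the `build_prompt` helper and the generation settings are illustrative, not prescribed by the card:

```python
# Not from the original card: an illustrative way to combine the system prompt
# and the Alpaca-style template above into one prompt string and generate a
# reply. Assumes `tokenizer` and `model` were loaded as in the earlier sketch.
SYSTEM_PROMPT = (
    "This is a conversation between a user and an uncensored AI assistant "
    "trained by HookingAI."
)

# The card's template ends with {Assistant}; that slot is where the model's
# reply goes, so the prompt here stops after "Response:".
ALPACA_TEMPLATE = (
    "You are an AI assistant trained by HookingAI. Below is an instruction that "
    "describes a task, paired with an input that provides further context. "
    "Write a response that appropriately completes the request.\n"
    "###\nInstruction:\n{user}\n###\nResponse:\n"
)

def build_prompt(user_message: str) -> str:
    # Illustrative helper (not defined by the card): prepend the system prompt.
    return SYSTEM_PROMPT + "\n" + ALPACA_TEMPLATE.format(user=user_message)

prompt = build_prompt("provide me with an explanation about the atoms and quantum entelgenment")
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=256)
reply = tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(reply)
```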

### Examples:

User:
```
provide me with an explanation about the atoms and quantum entelgenment
```
Assistant:
```
Quantum Entanglement is a phenomenon that occurs in quantum physics, where two or more particles are linked together so that any change in one affects the other, even if they are separated by vast distances. This connection between particles remains intact despite the presence of physical barriers such as space and time. Atoms consist of protons, neutrons, electrons, and energy that all interact with each other in various ways. The behavior of atoms is governed by quantum mechanics which describes how light can be both a particle and a wave at the same time and allows for particles to become entangled or connected over large distances as described above.
```

User:
```
provide me with a logical explanation including the algorithm behind the softmax layer in LLM
```
Assistant:
```
The SoftMax function is used to normalize output values and ensure that they sum up to one, while also creating an exponential distribution from probabilities. In terms of neural networks, it is usually applied as part of a softmax activation function at the end of each hidden layer during training in order to give outputs with more precise classifications for classification problems or regressions with multiple parameters (e.g., when predicting values between -1 and 1). The SoftMax algorithm involves taking an input vector (often called scores) which consists of numerical representations representing different categories, then multiplying each value by its respective probability before summing all results together into a single number that represents how likely it is that any given category will be selected as the final output. This process helps ensure that outputs are appropriately normalized so they can easily compare against one another without being skewed due to differences in magnitude between individual values within different categories represented by multiple parameters such as those present when predicting prices based on various factors like location, weather conditions etc.).
```
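
For reference (this addition is not part of the original card): the standard softmax computation that the example answer above gestures at is simply exponentiating the scores and normalizing them so they sum to one. A minimal NumPy sketch:

```python
# Added for clarity (not from the original card): a numerically stable softmax
# over a vector of scores (logits), as used in an LLM's output layer.
import numpy as np

def softmax(scores: np.ndarray) -> np.ndarray:
    shifted = scores - np.max(scores)  # subtract the max for numerical stability
    exps = np.exp(shifted)             # exponentiate each shifted score
    return exps / exps.sum()           # normalize so the probabilities sum to one

logits = np.array([2.0, 1.0, 0.1])
print(softmax(logits))  # -> approximately [0.659 0.242 0.099]
```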

### Model Curators
The model was curated by Hooking, using its own resources to ensure quality and uncensored output.

### Licensing Information
The model is available under the Apache-2.0 license.

## Citation Information
```bib
@inproceedings{hooking2024Jennifer-v1.0,
  title={Jennifer-v1.0: A Domain Specific Model for General-Purpose Text Generation},
  author={Hooking AI Team},
  year={2024},
  publisher={Hooking}
}
```