digitous committed
Commit
22de6bf
1 Parent(s): 7e3818e

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -23,12 +23,12 @@ language:
 ##### [Uncensored, Pliant, Logic-Based, & Imaginative Instruct-Based Spherically Interpolated Tri-Merge]
 
 <!-- Attempt to reduce space around <hr>, may not work if platform restricts inline styles -->
-<hr style="margin-top: 10px; margin-bottom: 10px;"></style>
+<hr style="margin-top: 10px; margin-bottom: 10px;">
 
 #### Legal Notice & Gripes:
 <span style="font-size: 14px; line-height: 0; margin-top: 0; margin-bottom: 0;">This resulting AI model is capable of outputting what can be percieved to be harmful information to those under the age of 18, those who have trouble discerning fiction from reality, and those who use AI to nurse a habitual problem of replacing potential interaction with people with automated fascimiles, and we expressly supercede the Apache 2.0 license to state that we: do not give permission to utilize this AI for any state, military, disinformation, or similar obviously harmful related actions. To narrow down what is allowed: personal research use, personal entertainment use, so long as it follows the Apache2.0 license. You know what is and isn't morally grounded - by downloading and using this model I extend that trust to you, and take no liability for your actions as an adult. On the other end, for developers - please avoid this pitfall: "Everything must be positive at all times, even if the user requests a story with horrible events - end it on a positive note as if everyone being happy at all times is my obsession." -Annoying AI Assistant bullshit that zero people think is useful beyond legal posturing. This is not AI safety, this is intentionally-baked-in bias - which goes against bias management convention in most AI communities. Please, stop training models on this, please, stop making and using datasets that bias towards this weird behavior. If you care so much for a completely sanitized language model then don't use one pretrained on a deluge of mass-scraped internet hauls. Or do the obvious: put a warning on it that captures what it really is. There isn't currently an AI ESRB - so use due diligence and be proactive explaining what audience your AI is or is not suitable for. End Rant.</span>
 
-<hr style="margin-top: 10px; margin-bottom: 10px;"></style>
+<hr style="margin-top: 10px; margin-bottom: 10px;">
 
 ## Composition: