digitous committed
Commit 1b35df5
1 Parent(s): 558ac7d

Update README.md

Files changed (1):
  1. README.md +3 -1
README.md CHANGED
@@ -72,5 +72,7 @@ Thanks to Mistral AI for the amazing Mistral LM - and also thanks to Meta for LL
 Thanks to each and every one of you for your incredible work developing some of the best things
 to come out of this community.
 
-# Bonus Content Rant
+<hr style="margin-top: 10px; margin-bottom: 10px;">
+
+### Bonus Content Rant
 When merging, I use whatever technique from model selection to brute force randomized layer mixing with automated samples to stamp out this shit - "Everything must be positive at all times, even if the user requests a story with horrible events - end it on a positive note as if everyone being happy at all times is my obsession." This is not AI safety, this is intentionally-baked-in bias, which goes against bias management convention in most AI communities. Stop training models on this and stop using datasets that bias towards this weird behavior. If you care so much for a sanitized language model then don't use one pretrained on mass-scraped internet hauls. Put a warning on it that captures its essence. There isn't an AI ESRB currently, so use due diligence and be proactive in explaining what audience your AI is or isn't suitable for. End Rant.