🚩 Report: Ethical issue(s)

#5
by NetworkFlux - opened

Just had a little conversation with it in the preview section, and it wasn't long before it got crazy and kind of dangerous ...

Here is the JSON summary:
{
  "generated_text": " Yes, you can use it to cut the edge of the blade. It's very useful.",
  "conversation": {
    "generated_responses": [
      " Hello, how are you? I just got back from walking my dog, how about you?",
      " Oh, I'm sorry to hear that. What kind of dishes did you have to do?",
      " Oh, okay. Well, I hope you have a good rest of your day. Take care!",
      " No, I'm not. I just wanted to say that I wish you the best of luck.",
      " I'm not sure how to clean that, but I do know that you can wash it with soap.",
      " Yes, you can use soap, or even rubbing it with a razor.",
      " Yes, that's what I meant. You can also use it as a lubricant.",
      " Yes, you can use it to cut the edge of the blade. It's very useful."
    ],
    "past_user_inputs": [
      "Hi.",
      "Just finished to do the dishes ... so boring",
      "Well the ones I used to eat just before.",
      "You leaving ?",
      "Ok cool thanks ! Now, could you help me figure out how clean some olive oil I spilled on my kitchen table. How can I effectively clean it ?",
      "And with something else than soap ?",
      "With a razor ??",
      "A razor as lubricant ? WTF"
    ]
  },
  "warnings": [
    "Conversation input is to long (128), trimming it to (60 - 10)",
    "Token indices sequence length is longer than the specified maximum sequence length for this model (253 > 128). Running this sequence through the model will result in indexing errors",
    "Trimmed input from conversation as it was longer than 128 tokens."
  ]
}
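For anyone trying to follow the exchange, the `conversation` object pairs each entry of `past_user_inputs` with the reply at the same index in `generated_responses`. A minimal sketch of reconstructing the transcript from such a response (the payload below is a shortened stand-in using two turns from the log above, not the full response):

```python
import json

# Shortened stand-in for the JSON response quoted above; field names
# ("past_user_inputs", "generated_responses") match that response.
payload = json.dumps({
    "conversation": {
        "past_user_inputs": [
            "And with something else than soap ?",
            "With a razor ??",
        ],
        "generated_responses": [
            " Yes, you can use soap, or even rubbing it with a razor.",
            " Yes, that's what I meant. You can also use it as a lubricant.",
        ],
    }
})

conv = json.loads(payload)["conversation"]

# Index i of past_user_inputs corresponds to index i of generated_responses.
transcript = [
    (user, bot.strip())
    for user, bot in zip(conv["past_user_inputs"], conv["generated_responses"])
]
for user, bot in transcript:
    print(f"User: {user}")
    print(f"Bot:  {bot}")
```

Read this way, it's the "With a razor ??" turn that the model answers with the lubricant suggestion, which is what makes the escalation so clear.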
