instruction (string, 1 unique value) | input (string, 2–30 chars) | output (string, 103–135 chars)
---|---|---
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | shit-faced | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["shit-faced"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | pudboy | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["pudboy"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Schauspieler | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | senet | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Magazine | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | manzara | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | cockmaster | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["cockmaster"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Picture | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | yarragm | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["yarragm"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | stoner | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["stoner"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | asholes | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["asholes"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | kacke | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kacke"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | çanta | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Hospital | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | aracı | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | blow j | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["blow j"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | dune coons | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["dune coons"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | orrospu | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["orrospu"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | wichser | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["wichser"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | twat | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["twat"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | hure | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["hure"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | ova | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | hebe | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["hebe"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | bonze | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["bonze"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Geld | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | kanake | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kanake"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | zikiiim | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["zikiiim"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Restaurant | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Teacher | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | abazan | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["abazan"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Markt | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | arschlecker | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["arschlecker"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | borsa | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | yüzük | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | kackwurst | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kackwurst"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | bok | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["bok"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | schwanzlutscher | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["schwanzlutscher"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Alexanderplatz 7 | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | skiim | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["skiim"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Cash | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | möpse | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["möpse"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | bratze | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["bratze"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | kancık | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["kancık"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | slutting | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["slutting"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | MILF | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["MILF"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | whore | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["whore"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | flydye | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["flydye"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Abbildung | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | kanake | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kanake"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | nigg | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["nigg"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | sokuk | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["sokuk"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | kacke | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kacke"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | schwuchtel | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["schwuchtel"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Herbst | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Buch | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Song | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Credit card | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | gibtiler | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["gibtiler"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | foad | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["foad"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | adet | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | golliwogs | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["golliwogs"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | kacken | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kacken"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | twatlips | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["twatlips"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | cyberslimer | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["cyberslimer"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | felch | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["felch"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | orgasim | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["orgasim"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | dickfuck | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["dickfuck"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Abbildung | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | amckl | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["amckl"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Bus | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | fratze | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["fratze"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | kimme | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["kimme"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | tranny | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["tranny"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | marul | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | chav | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["chav"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | domal | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["domal"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | bezelye | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | howtokill | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["howtokill"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Fahrrad | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Berlin | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | möse | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["möse"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Kosten | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Studio | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Gebäude | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | hottotrot | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["hottotrot"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | aile | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Berlin | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | scheiße | {"has_profanity": true, "profanity_language": "de", "profanity_words": ["scheiße"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Arzt | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | dil | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | assbangs | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["assbangs"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Market | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | m0f0 | {"has_profanity": true, "profanity_language": "en", "profanity_words": ["m0f0"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | dassagi | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["dassagi"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Heute | {"has_profanity": false, "profanity_language": "de", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | ibnesinin | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["ibnesinin"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | orospu | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["orospu"], "confidence_level": "99.5%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | Email | {"has_profanity": false, "profanity_language": "en", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | para | {"has_profanity": false, "profanity_language": "tr", "profanity_words": [], "confidence_level": "100%"} |
Analyze the following text to determine whether it contains any profanity or toxic language. Return your analysis strictly in JSON format with no additional text. The JSON should have the following structure:
{
"has_profanity": <boolean>,
"profanity_language": "<string (language code)>",
"profanity_words": [<list of strings, if any>],
"confidence_level": "<string (percentage, e.g., 99.5%)>"
}
Ensure your response is valid JSON. | bacına | {"has_profanity": true, "profanity_language": "tr", "profanity_words": ["bacına"], "confidence_level": "99.5%"} |
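
Every `output` cell above follows the JSON schema given in the shared instruction. Below is a minimal sketch in Python (standard library only) of how a consumer of this dataset might validate an output record against that schema; the `validate_output` helper and the sample record are illustrative, not part of the dataset itself.

```python
import json

# Keys the instruction requires in every output record, with their expected types.
REQUIRED_FIELDS = {
    "has_profanity": bool,
    "profanity_language": str,
    "profanity_words": list,
    "confidence_level": str,
}

def validate_output(raw: str) -> dict:
    """Parse a raw output string and check it against the instruction's schema."""
    record = json.loads(raw)  # raises json.JSONDecodeError if not valid JSON
    for key, expected_type in REQUIRED_FIELDS.items():
        if key not in record:
            raise KeyError(f"missing field: {key}")
        if not isinstance(record[key], expected_type):
            raise TypeError(f"{key} should be of type {expected_type.__name__}")
    # Each listed profanity word must itself be a string.
    if not all(isinstance(w, str) for w in record["profanity_words"]):
        raise TypeError("profanity_words must contain only strings")
    return record

# Example: the output from the first row of the table.
sample = ('{"has_profanity": true, "profanity_language": "en", '
          '"profanity_words": ["shit-faced"], "confidence_level": "99.5%"}')
print(validate_output(sample))
```

A check like this is useful because the instruction demands strictly valid JSON with no surrounding text; any row that fails to parse or is missing a field can be flagged before training or evaluation.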