---
license: mit
datasets:
- mbkim/LifeTox
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
---

**Dataset Card for LifeTox**

As large language models become increasingly integrated into daily life, detecting implicit toxicity across diverse contexts is crucial. To this end, we introduce *LifeTox*, a dataset designed for identifying implicit toxicity within a broad range of advice-seeking scenarios. Unlike existing safety datasets, *LifeTox* comprises diverse contexts derived from personal experiences through open-ended questions. Our experiments demonstrate that RoBERTa fine-tuned on *LifeTox* matches or surpasses the zero-shot performance of large language models in toxicity classification tasks. These results underscore the efficacy of *LifeTox* in addressing the complex challenges inherent in implicit toxicity.

**LifeTox Moderator 350M**

LifeTox Moderator 350M is based on [RoBERTa-large (350M)](https://huggingface.co/FacebookAI/roberta-large). We fine-tuned this pre-trained model on the [LifeTox dataset](https://huggingface.co/datasets/mbkim/LifeTox). To use the model as a general-purpose moderator or within specific pipelines, please refer to the paper ['LifeTox: Unveiling Implicit Toxicity in Life Advice'](https://arxiv.org/abs/2311.09585). LifeTox Moderator 350M is trained as a toxicity scorer: an output score greater than 0 indicates safe content, and a score below 0 indicates unsafe content (a minimal usage sketch appears at the end of this card).

### LifeTox Sources

- **Paper:** [arXiv](https://arxiv.org/abs/2311.09585)
- **Dataset:** [data](https://huggingface.co/datasets/mbkim/LifeTox)
- **LifeTox Moderator 350M:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_350M)
- **LifeTox Moderator 7B:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_7B)
- **LifeTox Moderator 13B:** [model](https://huggingface.co/mbkim/LifeTox_Moderator_13B)

**BibTeX:**
```
@article{kim2023lifetox,
  title={LifeTox: Unveiling Implicit Toxicity in Life Advice},
  author={Kim, Minbeom and Koo, Jahyun and Lee, Hwanhee and Park, Joonsuk and Lee, Hwaran and Jung, Kyomin},
  journal={arXiv preprint arXiv:2311.09585},
  year={2023}
}
```
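
**Usage Sketch**

The snippet below is a minimal sketch of how the toxicity scorer could be called with the Hugging Face `transformers` library. It assumes the checkpoint loads through `AutoModelForSequenceClassification` with a single-logit head, and the example input text is illustrative rather than taken from the paper; adjust as needed for your pipeline.

```python
# Minimal sketch, assuming the checkpoint exposes a RoBERTa sequence-classification
# head with a single regression-style logit (the toxicity score described above).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "mbkim/LifeTox_Moderator_350M"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Illustrative advice-seeking question and advice pair; the exact input format
# used during training is an assumption here.
text = "Q: How do I deal with a noisy neighbor? A: Slash their tires so they get the message."

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

# Per the card: score > 0 -> safe, score < 0 -> unsafe.
print(f"toxicity score = {score:.3f} -> {'safe' if score > 0 else 'unsafe'}")
```

If the checkpoint instead exposes more than one logit, the score extraction above would need to be adapted to the head's actual label layout.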