
Model Card for Hello-SimpleAI/chatgpt-detector-roberta

This model is trained on a mix of full-text answers and split sentences of answers from Hello-SimpleAI/HC3.

For more details, refer to the paper arXiv: 2301.07597 and the GitHub project Hello-SimpleAI/chatgpt-comparison-detection.

The base checkpoint is roberta-base. We trained it on all Hello-SimpleAI/HC3 data (with no held-out set) for 1 epoch.

(Training for 1 epoch is consistent with the experiments in our paper.)
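The detector can be loaded with the Hugging Face transformers text-classification pipeline. A minimal sketch is below; the exact label strings in the output (e.g. "Human" vs. "ChatGPT") are an assumption here, since they are defined by the checkpoint's config:

```python
# Minimal sketch: run the detector via the Hugging Face
# text-classification pipeline. Assumes `transformers` and a
# backend such as PyTorch are installed; the label names in the
# output come from the checkpoint's config and may differ.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="Hello-SimpleAI/chatgpt-detector-roberta",
)

text = "ChatGPT is a large language model trained to follow instructions."
result = detector(text)
print(result)  # a list like [{'label': ..., 'score': ...}]
```

Because the model was trained on both full texts and split sentences, it can be applied to inputs at either granularity.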

Citation

Check out the paper arXiv: 2301.07597:

@article{guo-etal-2023-hc3,
    title = "How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection",
    author = "Guo, Biyang  and
      Zhang, Xin  and
      Wang, Ziyuan  and
      Jiang, Minqi  and
      Nie, Jinran  and
      Ding, Yuxuan  and
      Yue, Jianwei  and
      Wu, Yupeng",
    journal = "arXiv preprint arXiv:2301.07597",
    year = "2023",
}