---
license: mit
language:
  - ko
metrics:
  - f1
library_name: transformers
tags:
  - bert
  - ruber
  - open-domain
  - chit-chat
  - evaluation
---

# BERT-RUBER for Korean Open-Domain Dialogue Evaluation

This model is a fine-tuned version of [KLUE BERT](https://huggingface.co/klue/bert-base) for open-domain dialogue evaluation, based on the original [BERT-RUBER](https://arxiv.org/pdf/1904.10635) architecture.

## Model Details

The model consists of a BERT encoder, which produces contextualized token embeddings, and an additional multi-layer classifier. The token embeddings are aggregated with max pooling before being passed to the classifier.

Further details can be found in the original paper: https://arxiv.org/pdf/1904.10635
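The architecture described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the actual training code: it assumes the encoder's hidden states (e.g. from `klue/bert-base`, hidden size 768) are already computed, and shows only the masked max pooling and a generic two-layer classifier head; the real layer sizes and output dimension may differ.

```python
import torch
import torch.nn as nn

class RuberHead(nn.Module):
    """Hypothetical sketch of the pooling + multi-layer classifier on top
    of a BERT encoder, in the spirit of BERT-RUBER."""

    def __init__(self, hidden_size: int = 768, num_labels: int = 1):
        super().__init__()
        # A generic multi-layer classifier; the paper's exact sizes may differ.
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, num_labels),
        )

    def forward(self, hidden_states: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden) from the BERT encoder.
        # Mask out padding positions so they never win the max.
        mask = attention_mask.unsqueeze(-1).bool()
        masked = hidden_states.masked_fill(~mask, float("-inf"))
        pooled, _ = masked.max(dim=1)  # max pooling over the token dimension
        return self.mlp(pooled)
```

In practice, `hidden_states` would come from running the KLUE BERT encoder over a tokenized (context, response) input, and the head's output would be the evaluation score.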


## Citation

- Ghazarian, S., Wei, J. T. Z., Galstyan, A., & Peng, N. (2019). Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. arXiv preprint arXiv:1904.10635.
- Park, S., Moon, J., Kim, S., Cho, W. I., Han, J., Park, J., ... & Cho, K. (2021). KLUE: Korean language understanding evaluation. arXiv preprint arXiv:2105.09680.

## Model Card Authors

Jaewoo (Kyle) Song
