---
license: mit
language:
- ko
metrics:
- f1
library_name: transformers
tags:
- bert
- ruber
- open-domain
- chit-chat
- evaluation
---
# Model Card

This model is a fine-tuned version of KLUE BERT (https://huggingface.co/klue/bert-base) for open-domain dialogue evaluation, based on the original BERT-RUBER (https://arxiv.org/pdf/1904.10635) architecture.
## Model Details
The model consists of a BERT encoder that produces contextualized embeddings and an additional multi-layer perceptron (MLP) classifier. Max pooling is used to aggregate the token embeddings into a fixed-size vector.
Further details can be found in the original paper: https://arxiv.org/pdf/1904.10635
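The architecture above (BERT embeddings, max pooling, and an MLP scorer over the query–reply pair) can be sketched roughly as follows. This is a minimal illustration, not the exact configuration of this checkpoint: the class name `BertRuberHead` and the hidden layer sizes are assumptions, and the random tensors stand in for `last_hidden_state` outputs of the BERT encoder.

```python
import torch
import torch.nn as nn


class BertRuberHead(nn.Module):
    """Illustrative BERT-RUBER-style unreferenced scorer head.

    Hypothetical sketch: layer sizes are assumptions, not this
    checkpoint's exact configuration.
    """

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        # The pooled query and reply vectors are concatenated
        # and passed through an MLP to produce a score in (0, 1).
        self.mlp = nn.Sequential(
            nn.Linear(hidden_size * 2, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),
        )

    def forward(self, query_embs: torch.Tensor, reply_embs: torch.Tensor) -> torch.Tensor:
        # Max pooling over the token (sequence) dimension: (batch, seq, hidden) -> (batch, hidden)
        q = query_embs.max(dim=1).values
        r = reply_embs.max(dim=1).values
        return self.mlp(torch.cat([q, r], dim=-1)).squeeze(-1)


head = BertRuberHead()
# Stand-ins for the BERT encoder's last_hidden_state of a query and a reply.
query = torch.randn(2, 10, 768)
reply = torch.randn(2, 8, 768)
scores = head(query, reply)
print(scores.shape)  # torch.Size([2]), one score per query-reply pair
```

In the actual model the two input tensors would come from the `klue/bert-base` encoder loaded via `transformers`, and the head would be trained to distinguish true replies from randomly sampled ones, as in the BERT-RUBER paper.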
### Model Description
- **Developed by:** devjwsong
- **Model type:** BertModel + MLP
- **Language(s) (NLP):** Korean
- **License:** MIT
- **Finetuned from model:** klue/bert-base (https://huggingface.co/klue/bert-base)
### Model Sources
- **Repository:** https://github.com/devjwsong/bert-ruber-kor-pytorch
- **Paper:** https://arxiv.org/pdf/1904.10635
## Citation
- Ghazarian, S., Wei, J. T. Z., Galstyan, A., & Peng, N. (2019). Better automatic evaluation of open-domain dialogue systems with contextualized embeddings. arXiv preprint arXiv:1904.10635.
- Park, S., Moon, J., Kim, S., Cho, W. I., Han, J., Park, J., ... & Cho, K. (2021). KLUE: Korean language understanding evaluation. arXiv preprint arXiv:2105.09680.
## Model Card Authors
Jaewoo (Kyle) Song