
This is a model post-trained on the following multi-turn Chinese dialogue corpora (only the training-set portions defined in the literature):

  • Douban
  • E-commerce
  • Restore-200k

The training objectives (losses to minimize) are masked language modeling and a next-sentence-prediction-style classification with three category labels: 0 (a random response drawn from the corpora), 1 (a random response from within the same dialogue context), and 2 (the correct next response).
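For illustration, here is a minimal sketch of how such three-way training examples might be constructed. The exact sampling procedure is defined in the literature; the function, variable names, and data layout below are hypothetical:

```python
import random

def make_nsp_examples(dialogue, corpus_responses):
    """Build the 3-label NSP-style examples described above.

    `dialogue` is a list of utterance strings (context turns followed by
    the true response); `corpus_responses` is a pool of responses drawn
    from the whole corpus. Both structures are assumptions for this sketch.
    """
    context, true_response = dialogue[:-1], dialogue[-1]
    return [
        (context, random.choice(corpus_responses), 0),  # label 0: random response from the corpora
        (context, random.choice(context), 1),           # label 1: random turn from the same dialogue
        (context, true_response, 2),                    # label 2: correct next response
    ]
```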

To encode a multi-turn dialogue with this model, use the format "[CLS] turn t-2 [eos] turn t-1 [SEP] response [SEP]", where tokens up to and including the first [SEP] belong to segment 0 and all tokens after it belong to segment 1. This mirrors the input format used for NSP training in BERT. In addition, a newly introduced token, [eos], separates the different turns within the context. If you have only a single context turn, you can drop [eos] and use "[CLS] turn t-1 [SEP] response [SEP]", with that turn as segment 0 and the response as segment 1. A usage sketch follows below.
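A minimal encoding sketch with the transformers library, assuming this scheme. The model id is a placeholder to be replaced with this repo's actual hub id, and the [eos] handling assumes the token may or may not already be in the released tokenizer's vocabulary:

```python
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "model-repo/placeholder"  # hypothetical; use this model's actual hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

# The card introduces [eos] during post-training; if the released tokenizer
# does not already contain it, register it as a special token.
if "[eos]" not in tokenizer.get_vocab():
    tokenizer.add_special_tokens({"additional_special_tokens": ["[eos]"]})
    model.resize_token_embeddings(len(tokenizer))

context = "你好 [eos] 最近怎么样"  # "turn t-2 [eos] turn t-1"
response = "挺好的，谢谢"          # candidate next response

# Passing a text pair produces "[CLS] context [SEP] response [SEP]" with
# token_type_ids 0 up to and including the first [SEP], and 1 afterwards,
# matching the segment layout described above.
inputs = tokenizer(context, response, return_tensors="pt")
outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] dialogue representation
```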

