---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: alltasks_m1-t1
  results: []
---

# InstructDial

Instruction tuning is an emergent paradigm in NLP wherein natural language instructions are leveraged with language models to induce zero-shot performance on unseen tasks. Instructions have been shown to enable good performance on unseen tasks and datasets in both large and small language models. Dialogue is an especially interesting area in which to explore instruction tuning because dialogue systems perform multiple kinds of tasks related to language (e.g., natural language understanding and generation, domain-specific interaction), yet instruction tuning has not been systematically explored for dialogue-related tasks. We introduce InstructDial, an instruction tuning framework for dialogue, which consists of a repository of 48 diverse dialogue tasks in a unified text-to-text format created from 59 openly available dialogue datasets. Next, we explore the cross-task generalization ability of models tuned on InstructDial across diverse dialogue tasks. Our analysis reveals that InstructDial enables good zero-shot performance on unseen datasets and tasks such as dialogue evaluation and intent detection, and even better performance in a few-shot setting. To ensure that models adhere to instructions, we introduce novel meta-tasks. We establish benchmark zero-shot and few-shot performance of models trained using the proposed framework on multiple dialogue tasks.

[Paper](https://arxiv.org/abs/2205.12673)


# Dial_BART0
A BART-large-sized model trained on InstructDial tasks. This model is a fine-tuned version of [yuchenlin/BART0pp](https://huggingface.co/yuchenlin/BART0pp) on the InstructDial datasets.
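
The checkpoint can be loaded with the standard `transformers` seq2seq API. The sketch below is only illustrative: the repository id is a placeholder for wherever this checkpoint is hosted, and the instruction-style prompt is an assumed example, not the exact InstructDial prompt template (see the paper and repository for the real task formats).

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id: substitute the actual hub path of this checkpoint.
model_name = "alltasks_m1-t1"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Hypothetical instruction-style input; the official InstructDial prompt
# templates may differ from this illustration.
prompt = (
    "Instruction: Given the dialogue context, generate the next response.\n"
    "Context: [USER] I'd like to book a table for two tonight. [SYSTEM]"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```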


## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

All tasks in the InstructDial framework (including all dialogue evaluation tasks).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 9
- eval_batch_size: 9
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 72
- total_eval_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
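
As a minimal sketch, the hyperparameters above map onto `transformers.Seq2SeqTrainingArguments` roughly as follows; the output directory is a placeholder, and with 8 GPUs at a per-device batch size of 9 the effective total batch size is 8 × 9 = 72.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="alltasks_m1-t1",   # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=9,
    per_device_eval_batch_size=9,
    seed=42,
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```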


### Framework versions

- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1