sergioburdisso committed a85edbb (1 parent: b15e034): Update README.md

Files changed (1): README.md (+48 −18)
---
language: en
license: mit
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- task-oriented-dialogues
- dialog-flow
datasets:
- Salesforce/dialogstudio
pipeline_tag: sentence-similarity
base_model:
- aws-ai/dse-bert-base
---

# Dialog2Flow single target (DSE-base)

This is a variation of the **D2F$_{single}$** model introduced in the paper ["Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction"](https://publications.idiap.ch/attachments/papers/2024/Burdisso_EMNLP2024_2024.pdf), published in the EMNLP 2024 main conference.
This version uses DSE-base as the backbone model, which yields an increase in performance compared to the vanilla version that uses BERT-base as the backbone (results reported in Appendix C).

Implementation-wise, this is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->
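As a quick illustration of the semantic-search use case, cosine similarity over such embedding vectors ranks related utterances above unrelated ones. The vectors below are random placeholders standing in for real 768-dimensional outputs of `model.encode(...)`:

```python
import numpy as np

# Placeholder vectors standing in for 768-d utterance embeddings
# (in practice these would come from model.encode(...)).
rng = np.random.default_rng(42)
a = rng.normal(size=768)
b = a + rng.normal(scale=0.1, size=768)  # a slightly perturbed copy of a
c = rng.normal(size=768)                 # an unrelated vector

def cosine(u, v):
    # Cosine similarity: dot product of the normalized vectors.
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# A vector should be far more similar to its perturbed copy
# than to an unrelated random vector.
print(cosine(a, b) > cosine(a, c))  # True
```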
 
 
With [sentence-transformers](https://www.SBERT.net) installed, you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["your phone please", "okay may i have your telephone number please"]

model = SentenceTransformer('sergioburdisso/dialog2flow-single-dse-base')
embeddings = model.encode(sentences)
print(embeddings)
```
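Since the paper's goal is automatic dialog flow extraction, a typical downstream step is clustering utterance embeddings into dialog actions. Below is a minimal k-means sketch on placeholder vectors; in real usage you would cluster the outputs of `model.encode(...)` (e.g. with scikit-learn's `KMeans`) rather than this simplified routine:

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder "embeddings": two well-separated groups standing in for
# utterances with two different dialog actions (real vectors are 768-d
# outputs of model.encode(...)).
emb = np.vstack([rng.normal(0.0, 0.1, size=(5, 8)),
                 rng.normal(5.0, 0.1, size=(5, 8))])

def kmeans(x, k, iters=20):
    # Simplified k-means with deterministic init (k evenly spaced points).
    centers = x[np.linspace(0, len(x) - 1, k).astype(int)]
    for _ in range(iters):
        # Assign each point to its nearest center (squared Euclidean).
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean(axis=0)
    return labels

labels = kmeans(emb, 2)
print(labels)  # first 5 utterances share one cluster, last 5 the other
```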
 
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings:

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ['your phone please', 'okay may i have your telephone number please']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('sergioburdisso/dialog2flow-single-dse-base')
model = AutoModel.from_pretrained('sergioburdisso/dialog2flow-single-dse-base')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
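The masked averaging performed by the `mean_pooling` step can be illustrated on a toy example: padded positions are zeroed out by the attention mask so they do not distort the sentence vector. A standalone NumPy sketch of the same operation:

```python
import numpy as np

# Toy token embeddings: batch of 1 sentence, 4 token positions, dim 3.
# The last position is padding (mask = 0) and must not affect the average.
token_embeddings = np.array([[[1., 1., 1.],
                              [3., 3., 3.],
                              [5., 5., 5.],
                              [9., 9., 9.]]])   # padded position
attention_mask = np.array([[1, 1, 1, 0]])

def mean_pooling(token_embeddings, attention_mask):
    mask = attention_mask[..., None].astype(float)   # (batch, seq, 1)
    summed = (token_embeddings * mask).sum(axis=1)   # masked sum over tokens
    counts = np.clip(mask.sum(axis=1), 1e-9, None)   # avoid division by zero
    return summed / counts

# Only the three real tokens are averaged: (1 + 3 + 5) / 3 = 3.
print(mean_pooling(token_embeddings, attention_mask))  # [[3. 3. 3.]]
```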

## Training
The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 363506 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`spretrainer.losses.LabeledContrastiveLoss.LabeledContrastiveLoss`

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 49478 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
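The `LabeledContrastiveLoss` implementation lives in the `spretrainer` package and is not reproduced here; as a rough, hypothetical sketch only (function name and all details are assumptions, not the actual implementation), a label-supervised contrastive objective over utterance embeddings can look like this:

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.05):
    """Illustrative label-based contrastive loss: utterances sharing an
    action label are pulled together, all others pushed apart.
    A generic sketch, NOT the actual spretrainer implementation."""
    # L2-normalize so similarities are cosine similarities.
    x = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = x @ x.T / temperature
    n = len(labels)
    loss, terms = 0.0, 0
    for i in range(n):
        positives = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not positives:
            continue
        # Softmax denominator over all other samples.
        denom = np.sum([np.exp(sim[i, j]) for j in range(n) if j != i])
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            terms += 1
    return loss / terms

rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))           # 4 dummy utterance embeddings
loss = supervised_contrastive_loss(emb, [0, 0, 1, 1])
print(loss > 0)  # True: each -log term is strictly positive
```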
 
Parameters of the fit()-Method:
```
{
    "epochs": 15,
    "evaluation_steps": 164,
    "evaluator": [
        "spretrainer.evaluation.FewShotClassificationEvaluator.FewShotClassificationEvaluator"
    ],
    ...
}
```
 

## Citing & Authors

```bibtex
@inproceedings{burdisso-etal-2024-dialog2flow,
    title = "Dialog2Flow: Pre-training Soft-Contrastive Action-Driven Sentence Embeddings for Automatic Dialog Flow Extraction",
    author = "Burdisso, Sergio  and
      Madikeri, Srikanth  and
      Motlicek, Petr",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami",
    publisher = "Association for Computational Linguistics",
}
```

## License

Copyright (c) 2024 [Idiap Research Institute](https://www.idiap.ch/).
MIT License.