bmarie4i committed on
Commit 3696977
1 Parent(s): a12cce9

Model card

Files changed (1)
  1. README.md +60 -0
README.md CHANGED
@@ -1,3 +1,63 @@
  ---
  license: cc-by-nc-sa-4.0
+ language:
+ - en
+ tags:
+ - disfluency identification
  ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+ This BERT model classifies a user utterance to a dialogue system as either fluent or disfluent.
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ - **Developed by:** 4i Intelligent Insights
+ - **Model type:** BERT base cased
+ - **Language(s) (NLP):** English
+ - **License:** cc-by-nc-sa-4.0
+
+ ### Model Sources
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** To be announced
+ - **Paper:** To be announced
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ The model is intended for classifying English utterances from users interacting with a dialogue system. In our evaluation, the user utterances were speech transcriptions.
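+
+ As a minimal usage sketch, the classifier can be called through the `transformers` text-classification pipeline. The repository id below is a placeholder (the official repository is still to be announced), and the exact label names depend on the model's configuration:
+
+ ```python
+ from transformers import pipeline
+
+ # Placeholder repository id; replace with the released model id.
+ MODEL_ID = "4i-ai/bert-disfluency-identification"
+
+ # Load the fine-tuned BERT classifier as a text-classification pipeline.
+ classifier = pipeline("text-classification", model=MODEL_ID)
+
+ # Classify a speech-transcription-style user utterance.
+ utterance = "I want to uh I want to book a flight to to Boston"
+ print(classifier(utterance))
+ # e.g. [{'label': 'disfluent', 'score': 0.97}] -- actual label names depend on the model's config
+ ```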
+
+
+ ## Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ This model has not been evaluated on machine-generated text.
+
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ This model may be less accurate on utterances from non-native English speakers.
+
+
+
+
+ ## Training Data
+
+ <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ The model has been fine-tuned on the Fisher English Corpus:
+ http://github.com/joshua-decoder/fisher-callhome-corpus