Datasets: snips_built_in_intents
Tasks: Text Classification
Modalities: Text
Formats: parquet
Sub-tasks: intent-classification
Languages: English
Size: < 1K
ArXiv: 1805.10190
License: cc0-1.0
Commit • 0e4202f
Parent(s):
Update files from the datasets library (from 1.2.0)
Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0
- .gitattributes +27 -0
- README.md +148 -0
- dataset_infos.json +1 -0
- dummy/0.0.0/dummy_data.zip +3 -0
- snips_built_in_intents.py +124 -0
.gitattributes
ADDED
@@ -0,0 +1,27 @@
+*.7z filter=lfs diff=lfs merge=lfs -text
+*.arrow filter=lfs diff=lfs merge=lfs -text
+*.bin filter=lfs diff=lfs merge=lfs -text
+*.bin.* filter=lfs diff=lfs merge=lfs -text
+*.bz2 filter=lfs diff=lfs merge=lfs -text
+*.ftz filter=lfs diff=lfs merge=lfs -text
+*.gz filter=lfs diff=lfs merge=lfs -text
+*.h5 filter=lfs diff=lfs merge=lfs -text
+*.joblib filter=lfs diff=lfs merge=lfs -text
+*.lfs.* filter=lfs diff=lfs merge=lfs -text
+*.model filter=lfs diff=lfs merge=lfs -text
+*.msgpack filter=lfs diff=lfs merge=lfs -text
+*.onnx filter=lfs diff=lfs merge=lfs -text
+*.ot filter=lfs diff=lfs merge=lfs -text
+*.parquet filter=lfs diff=lfs merge=lfs -text
+*.pb filter=lfs diff=lfs merge=lfs -text
+*.pt filter=lfs diff=lfs merge=lfs -text
+*.pth filter=lfs diff=lfs merge=lfs -text
+*.rar filter=lfs diff=lfs merge=lfs -text
+saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+*.tar.* filter=lfs diff=lfs merge=lfs -text
+*.tflite filter=lfs diff=lfs merge=lfs -text
+*.tgz filter=lfs diff=lfs merge=lfs -text
+*.xz filter=lfs diff=lfs merge=lfs -text
+*.zip filter=lfs diff=lfs merge=lfs -text
+*.zstandard filter=lfs diff=lfs merge=lfs -text
+*tfevents* filter=lfs diff=lfs merge=lfs -text
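These patterns route large binary artifacts through Git LFS. A rough way to sanity-check which of this commit's files they capture is a short sketch with `fnmatch`; this only approximates real gitattributes glob semantics (which differ in details, e.g. for `saved_model/**/*`):

```python
# Rough sketch: approximate the .gitattributes LFS globs with fnmatch.
# This is an approximation, not a reimplementation of gitattributes matching.
import fnmatch

lfs_patterns = ["*.7z", "*.arrow", "*.parquet", "*.zip", "*tfevents*"]  # subset of the list above
repo_paths = [
    "README.md",
    "dataset_infos.json",
    "dummy/0.0.0/dummy_data.zip",
    "snips_built_in_intents.py",
]

for path in repo_paths:
    basename = path.rsplit("/", 1)[-1]
    is_lfs = any(fnmatch.fnmatch(basename, pat) for pat in lfs_patterns)
    print(f"{path}: {'LFS pointer' if is_lfs else 'stored directly'}")

# Only dummy/0.0.0/dummy_data.zip matches (*.zip), which is why that file
# appears below as a 3-line LFS pointer rather than raw zip bytes.
```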
README.md
ADDED
@@ -0,0 +1,148 @@
+---
+annotations_creators:
+- expert-generated
+language_creators:
+- expert-generated
+languages:
+- en
+licenses:
+- cc0-1-0
+multilinguality:
+- monolingual
+size_categories:
+- n<1K
+source_datasets:
+- original
+task_categories:
+- text-classification
+task_ids:
+- intent-classification
+---
+
+# Dataset Card for Snips Built In Intents
+
+## Table of Contents
+- [Dataset Description](#dataset-description)
+  - [Dataset Summary](#dataset-summary)
+  - [Supported Tasks](#supported-tasks-and-leaderboards)
+  - [Languages](#languages)
+- [Dataset Structure](#dataset-structure)
+  - [Data Instances](#data-instances)
+  - [Data Fields](#data-fields)
+  - [Data Splits](#data-splits)
+- [Dataset Creation](#dataset-creation)
+  - [Curation Rationale](#curation-rationale)
+  - [Source Data](#source-data)
+  - [Annotations](#annotations)
+  - [Personal and Sensitive Information](#personal-and-sensitive-information)
+- [Considerations for Using the Data](#considerations-for-using-the-data)
+  - [Social Impact of Dataset](#social-impact-of-dataset)
+  - [Discussion of Biases](#discussion-of-biases)
+  - [Other Known Limitations](#other-known-limitations)
+- [Additional Information](#additional-information)
+  - [Dataset Curators](#dataset-curators)
+  - [Licensing Information](#licensing-information)
+  - [Citation Information](#citation-information)
+
+## Dataset Description
+
+- **Homepage:** https://github.com/sonos/nlu-benchmark/tree/master/2016-12-built-in-intents
+- **Repository:** https://github.com/sonos/nlu-benchmark/tree/master/2016-12-built-in-intents
+- **Paper:** https://arxiv.org/abs/1805.10190
+- **Point of Contact:** The Snips team joined Sonos in November 2019. These open datasets remain available, and access to them is now managed by the Sonos Voice Experience Team. Please email sve-research@sonos.com with any questions.
+
+### Dataset Summary
+
+Snips' built-in intents dataset was initially used to compare different voice assistants and released as a public dataset hosted at
+https://github.com/sonos/nlu-benchmark in folder 2016-12-built-in-intents. The dataset contains 328 utterances over 10 intent classes.
+A related Medium post is https://medium.com/snips-ai/benchmarking-natural-language-understanding-systems-d35be6ce568d.
+
+### Supported Tasks and Leaderboards
+
+There are no related shared tasks that we are aware of.
+
+### Languages
+
+English
+
+## Dataset Structure
+
+### Data Instances
+
+The dataset contains 328 utterances over 10 intent classes. Each sample looks like:
+`{'label': 8, 'text': 'Transit directions to Barcelona Pizza.'}`
+
+### Data Fields
+
+- `text`: The text utterance expressing some user intent.
+- `label`: The intent label of the utterance.
+
+### Data Splits
+
+The source data is not split; all 328 examples ship as a single train split.
+
+## Dataset Creation
+
+### Curation Rationale
+
+The dataset was originally created to compare the performance of a number of voice assistants. However, the labelled utterances are useful
+for developing and benchmarking text chatbots as well.
+
+### Source Data
+
+#### Initial Data Collection and Normalization
+
+It is not clear how the data was collected. From the Medium post: "The benchmark relies on a set of 328 queries built by the business team
+at Snips, and kept secret from data scientists and engineers throughout the development of the solution."
+
+#### Who are the source language producers?
+
+Originally prepared by snips.ai. The Snips team joined Sonos in November 2019. These open datasets remain available, and access to
+them is now managed by the Sonos Voice Experience Team. Please email sve-research@sonos.com with any questions.
+
+### Annotations
+
+#### Annotation process
+
+It is not clear how the data was collected. From the Medium post: "The benchmark relies on a set of 328 queries built by the business team
+at Snips, and kept secret from data scientists and engineers throughout the development of the solution."
+
+#### Who are the annotators?
+
+[More Information Needed]
+
+### Personal and Sensitive Information
+
+[More Information Needed]
+
+## Considerations for Using the Data
+
+### Social Impact of Dataset
+
+[More Information Needed]
+
+### Discussion of Biases
+
+[More Information Needed]
+
+### Other Known Limitations
+
+[More Information Needed]
+
+## Additional Information
+
+### Dataset Curators
+
+Originally prepared by snips.ai. The Snips team joined Sonos in November 2019. These open datasets remain available, and access to
+them is now managed by the Sonos Voice Experience Team. Please email sve-research@sonos.com with any questions.
+
+### Licensing Information
+
+The source data is licensed under Creative Commons Zero v1.0 Universal.
+
+### Citation Information
+
+Any publication based on these datasets must include a full citation to the following paper, in which the results were published by the Snips Team:
+
+Coucke A. et al., "Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces." CoRR 2018,
+https://arxiv.org/abs/1805.10190
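To make the card's "Data Instances" example concrete, here is a minimal loading sketch with the `datasets` library; the dataset id `snips_built_in_intents` and the single `train` split are taken from the files in this commit:

```python
# Minimal sketch: load the dataset this card describes and inspect it.
from datasets import load_dataset

ds = load_dataset("snips_built_in_intents", split="train")  # single unsplit set
print(ds.num_rows)  # 328
print(ds[0])        # a dict like {'text': '...', 'label': ...}

# `label` is a ClassLabel feature, so integers map back to intent names;
# per the feature definition, id 8 is "GetDirections", which matches the
# card's example {'label': 8, 'text': 'Transit directions to Barcelona Pizza.'}.
print(ds.features["label"].int2str(8))  # GetDirections
```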
dataset_infos.json
ADDED
@@ -0,0 +1 @@
+{"default": {"description": "Snips' built in intents dataset was initially used to compare different voice assistants and released as a public dataset hosted at\nhttps://github.com/sonos/nlu-benchmark 2016-12-built-in-intents. The dataset contains 328 utterances over 10 intent classes. The\nrelated paper mentioned on the github page is https://arxiv.org/abs/1805.10190 and a related Medium post is\nhttps://medium.com/snips-ai/benchmarking-natural-language-understanding-systems-d35be6ce568d .\n", "citation": "@article{DBLP:journals/corr/abs-1805-10190,\n  author    = {Alice Coucke and\n               Alaa Saade and\n               Adrien Ball and\n               Th{\'{e}}odore Bluche and\n               Alexandre Caulier and\n               David Leroy and\n               Cl{\'{e}}ment Doumouro and\n               Thibault Gisselbrecht and\n               Francesco Caltagirone and\n               Thibaut Lavril and\n               Ma{\"{e}}l Primet and\n               Joseph Dureau},\n  title     = {Snips Voice Platform: an embedded Spoken Language Understanding system\n               for private-by-design voice interfaces},\n  journal   = {CoRR},\n  volume    = {abs/1805.10190},\n  year      = {2018},\n  url       = {http://arxiv.org/abs/1805.10190},\n  archivePrefix = {arXiv},\n  eprint    = {1805.10190},\n  timestamp = {Mon, 13 Aug 2018 16:46:59 +0200},\n  biburl    = {https://dblp.org/rec/journals/corr/abs-1805-10190.bib},\n  bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/sonos/nlu-benchmark/tree/master/2016-12-built-in-intents", "license": "", "features": {"text": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"num_classes": 10, "names": ["ComparePlaces", "RequestRide", "GetWeather", "SearchPlace", "GetPlaceDetails", "ShareCurrentLocation", "GetTrafficInformation", "BookRestaurant", "GetDirections", "ShareETA"], "names_file": null, "id": null, "_type": "ClassLabel"}}, "post_processed": null, "supervised_keys": null, "builder_name": "snips_built_in_intents", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 19431, "num_examples": 328, "dataset_name": "snips_built_in_intents"}}, "download_checksums": {"https://raw.githubusercontent.com/sonos/nlu-benchmark/master/2016-12-built-in-intents/benchmark_data.json": {"num_bytes": 9130264, "checksum": "e3f6ba7b7ab0e8d1a5959a8c8ecb4fc566a281f4ebd34fdf1160929c630d299f"}}, "download_size": 9130264, "post_processing_size": null, "dataset_size": 19431, "size_in_bytes": 9149695}}
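Since the metadata above is a single JSON object, the recorded class names and split sizes can be read back directly; a small sketch, assuming `dataset_infos.json` sits in the working directory:

```python
# Sketch: recover the label id -> intent name mapping and split size
# recorded in dataset_infos.json (path assumed relative to this repo).
import json

with open("dataset_infos.json", encoding="utf-8") as f:
    infos = json.load(f)

names = infos["default"]["features"]["label"]["names"]
print(len(names))                                           # 10 intent classes
print(names[8])                                             # GetDirections
print(infos["default"]["splits"]["train"]["num_examples"])  # 328
```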
dummy/0.0.0/dummy_data.zip
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e0e1fc5f5396f649d77fa2a866b14ffd86bbaf70f7060150e0c8410a35d1200
+size 3038
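The three lines above are a standard Git LFS pointer (spec v1): space-separated key/value pairs giving the spec version, the SHA-256 of the real blob, and its size in bytes. A tiny parsing sketch; on a checkout without `git-lfs` installed, the path below contains this pointer text rather than the real zip:

```python
# Sketch: parse a Git LFS pointer file into a dict of its key/value lines.
def parse_lfs_pointer(path):
    fields = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            key, _, value = line.strip().partition(" ")
            fields[key] = value
    return fields

ptr = parse_lfs_pointer("dummy/0.0.0/dummy_data.zip")
print(ptr["version"])    # https://git-lfs.github.com/spec/v1
print(ptr["oid"])        # sha256:4e0e1fc5...d1200
print(int(ptr["size"]))  # 3038 -- size of the real zip held in LFS storage
```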
snips_built_in_intents.py
ADDED
@@ -0,0 +1,124 @@
+# coding=utf-8
+# Copyright 2020 The HuggingFace Datasets Authors.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Lint as: python3
+"""Snips built in intents (2016-12-built-in-intents) dataset."""
+
+from __future__ import absolute_import, division, print_function
+
+import json
+
+import datasets
+
+
+_DESCRIPTION = """\
+Snips' built in intents dataset was initially used to compare different voice assistants and released as a public dataset hosted at
+https://github.com/sonos/nlu-benchmark 2016-12-built-in-intents. The dataset contains 328 utterances over 10 intent classes. The
+related paper mentioned on the github page is https://arxiv.org/abs/1805.10190 and a related Medium post is
+https://medium.com/snips-ai/benchmarking-natural-language-understanding-systems-d35be6ce568d .
+"""
+
+_CITATION = """\
+@article{DBLP:journals/corr/abs-1805-10190,
+  author    = {Alice Coucke and
+               Alaa Saade and
+               Adrien Ball and
+               Th{\'{e}}odore Bluche and
+               Alexandre Caulier and
+               David Leroy and
+               Cl{\'{e}}ment Doumouro and
+               Thibault Gisselbrecht and
+               Francesco Caltagirone and
+               Thibaut Lavril and
+               Ma{\"{e}}l Primet and
+               Joseph Dureau},
+  title     = {Snips Voice Platform: an embedded Spoken Language Understanding system
+               for private-by-design voice interfaces},
+  journal   = {CoRR},
+  volume    = {abs/1805.10190},
+  year      = {2018},
+  url       = {http://arxiv.org/abs/1805.10190},
+  archivePrefix = {arXiv},
+  eprint    = {1805.10190},
+  timestamp = {Mon, 13 Aug 2018 16:46:59 +0200},
+  biburl    = {https://dblp.org/rec/journals/corr/abs-1805-10190.bib},
+  bibsource = {dblp computer science bibliography, https://dblp.org}
+}
+"""
+
+_DOWNLOAD_URL = (
+    "https://raw.githubusercontent.com/sonos/nlu-benchmark/master/2016-12-built-in-intents/benchmark_data.json"
+)
+
+
+class SnipsBuiltInIntents(datasets.GeneratorBasedBuilder):
+    """Snips built in intents (2016-12-built-in-intents) dataset."""
+
+    def _info(self):
+        # ToDo: Consider adding an alternate configuration for the entity slots. The default is to only return the intent labels.
+
+        return datasets.DatasetInfo(
+            description=_DESCRIPTION,
+            features=datasets.Features(
+                {
+                    "text": datasets.Value("string"),
+                    "label": datasets.features.ClassLabel(
+                        names=[
+                            "ComparePlaces",
+                            "RequestRide",
+                            "GetWeather",
+                            "SearchPlace",
+                            "GetPlaceDetails",
+                            "ShareCurrentLocation",
+                            "GetTrafficInformation",
+                            "BookRestaurant",
+                            "GetDirections",
+                            "ShareETA",
+                        ]
+                    ),
+                }
+            ),
+            homepage="https://github.com/sonos/nlu-benchmark/tree/master/2016-12-built-in-intents",
+            citation=_CITATION,
+        )
+
+    def _split_generators(self, dl_manager):
+        # Note: The source dataset doesn't have a train-test split.
+        # ToDo: Consider splitting the data into train-test sets and re-hosting.
+        samples_path = dl_manager.download_and_extract(_DOWNLOAD_URL)
+        return [
+            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": samples_path}),
+        ]
+
+    def _generate_examples(self, filepath):
+        """Snips built in intent examples."""
+        num_examples = 0
+
+        with open(filepath, encoding="utf-8") as file_obj:
+            snips_dict = json.load(file_obj)
+            domains = snips_dict["domains"]
+
+            for domain_dict in domains:
+                intents = domain_dict["intents"]
+
+                for intent_dict in intents:
+                    label = intent_dict["benchmark"]["Snips"]["original_intent_name"]
+                    queries = intent_dict["queries"]
+
+                    for query_dict in queries:
+                        query_text = query_dict["text"]
+
+                        yield num_examples, {"text": query_text, "label": label}
+                        num_examples += 1  # Explicitly keep track of the number of examples.
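`_generate_examples` hard-codes the nesting of `benchmark_data.json` (domains → intents → queries, with the label under `benchmark.Snips.original_intent_name`). A stripped-down sketch of that traversal against an inline stub makes the assumed shape explicit; the stub values are made up for illustration:

```python
# Sketch of the JSON shape the loading script above walks; stub data only.
stub = {
    "domains": [
        {
            "intents": [
                {
                    "benchmark": {"Snips": {"original_intent_name": "GetWeather"}},
                    "queries": [{"text": "What's the weather in Paris?"}],
                }
            ]
        }
    ]
}

# Same triple loop as _generate_examples, minus the datasets plumbing.
for domain in stub["domains"]:
    for intent in domain["intents"]:
        label = intent["benchmark"]["Snips"]["original_intent_name"]
        for query in intent["queries"]:
            print(label, "->", query["text"])  # GetWeather -> What's the weather in Paris?
```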