autonlp-data-processor committed on
Commit • 704cf00 • 1 Parent(s): c0478cb
Processed data from autonlp data processor (2021-11-20 18:00)
- README.md +61 -0
- processed/dataset_dict.json +1 -0
- processed/train/dataset.arrow +3 -0
- processed/train/dataset_info.json +37 -0
- processed/train/state.json +18 -0
- processed/valid/dataset.arrow +3 -0
- processed/valid/dataset_info.json +37 -0
- processed/valid/state.json +18 -0
README.md
ADDED
@@ -0,0 +1,61 @@
---
task_categories:
- conditional-text-generation
---

# AutoNLP Dataset for project: Scientific_Title_Generator

## Table of contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)

## Dataset Description

This dataset has been automatically processed by AutoNLP for the project Scientific_Title_Generator.

### Languages

The BCP-47 code for the dataset's language is unk.

## Dataset Structure

### Data Instances

A sample from this dataset looks as follows:

```json
[
  {
    "target": "Unification of Fusion Theories, Rules, Filters, Image Fusion and Target\n Tracking Methods (UFT)",
    "text": " The author has pledged in various papers, conference or seminar\npresentations, and scientific grant applications (between 2004-2015) for the\nunification of fusion theories, combinations of fusion rules, image fusion\nprocedures, filter algorithms, and target tracking methods for more accurate\napplications to our real world problems - since neither fusion theory nor\nfusion rule fully satisfy all needed applications. For each particular\napplication, one selects the most appropriate fusion space and fusion model,\nthen the fusion rules, and the algorithms of implementation. He has worked in\nthe Unification of the Fusion Theories (UFT), which looks like a cooking\nrecipe, better one could say like a logical chart for a computer programmer,\nbut one does not see another method to comprise/unify all things. The\nunification scenario presented herein, which is now in an incipient form,\nshould periodically be updated incorporating new discoveries from the fusion\nand engineering research.\n"
  },
  {
    "target": "Investigation of Variances in Belief Networks",
    "text": " The belief network is a well-known graphical structure for representing\nindependences in a joint probability distribution. The methods, which perform\nprobabilistic inference in belief networks, often treat the conditional\nprobabilities which are stored in the network as certain values. However, if\none takes either a subjectivistic or a limiting frequency approach to\nprobability, one can never be certain of probability values. An algorithm\nshould not only be capable of reporting the probabilities of the alternatives\nof remaining nodes when other nodes are instantiated; it should also be capable\nof reporting the uncertainty in these probabilities relative to the uncertainty\nin the probabilities which are stored in the network. In this paper a method\nfor determining the variances in inferred probabilities is obtained under the\nassumption that a posterior distribution on the uncertainty variables can be\napproximated by the prior distribution. It is shown that this assumption is\nplausible if their is a reasonable amount of confidence in the probabilities\nwhich are stored in the network. Furthermore in this paper, a surprising upper\nbound for the prior variances in the probabilities of the alternatives of all\nnodes is obtained in the case where the probability distributions of the\nprobabilities of the alternatives are beta distributions. It is shown that the\nprior variance in the probability at an alternative of a node is bounded above\nby the largest variance in an element of the conditional probability\ndistribution for that node.\n"
  }
]
```

### Data Fields

The dataset has the following fields (also called "features"):

```json
{
  "target": "Value(dtype='string', id=None)",
  "text": "Value(dtype='string', id=None)"
}
```

### Data Splits

This dataset is split into train and validation splits. The split sizes are as follows:

| Split name | Num samples |
| ---------- | ----------- |
| train      | 5784        |
| valid      | 1446        |
processed/dataset_dict.json
ADDED
@@ -0,0 +1 @@
{"splits": ["train", "valid"]}
processed/train/dataset.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c93d03a9a774fd14bcf0ca95cc2c2f00a21c65d4328debe3a560581dc442e62b
size 6139768
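The `.arrow` files above are stored as Git LFS pointer files: three `key value` lines identifying the real blob by hash and size. A minimal stdlib sketch for parsing such a pointer (the `parse_lfs_pointer` helper is hypothetical, written here for illustration):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into a key -> value dict.

    Each non-empty line has the form "key value"; see the Git LFS
    spec referenced by the pointer itself (git-lfs.github.com/spec/v1).
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The train-split pointer exactly as it appears in this commit.
pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:c93d03a9a774fd14bcf0ca95cc2c2f00a21c65d4328debe3a560581dc442e62b
size 6139768
"""

fields = parse_lfs_pointer(pointer)
print(fields["oid"], int(fields["size"]))
```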
processed/train/dataset_info.json
ADDED
@@ -0,0 +1,37 @@
{
  "builder_name": null,
  "citation": "",
  "config_name": null,
  "dataset_size": null,
  "description": "AutoNLP generated dataset",
  "download_checksums": null,
  "download_size": null,
  "features": {
    "target": {
      "dtype": "string",
      "id": null,
      "_type": "Value"
    },
    "text": {
      "dtype": "string",
      "id": null,
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": "",
  "post_processed": null,
  "post_processing_size": null,
  "size_in_bytes": null,
  "splits": {
    "train": {
      "name": "train",
      "num_bytes": 6137947,
      "num_examples": 5784,
      "dataset_name": null
    }
  },
  "supervised_keys": null,
  "task_templates": null,
  "version": null
}
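The `features` block in `dataset_info.json` maps each column name to a dtype descriptor. A small stdlib sketch extracting the column dtypes from a trimmed copy of that block (illustrative only; the `datasets` library does this internally when it loads the dataset):

```python
import json

# A trimmed copy of the "features" block from dataset_info.json above.
info = json.loads("""
{
  "features": {
    "target": {"dtype": "string", "id": null, "_type": "Value"},
    "text":   {"dtype": "string", "id": null, "_type": "Value"}
  }
}
""")

# Map each column name to its dtype: both columns here are plain strings.
dtypes = {name: spec["dtype"] for name, spec in info["features"].items()}
print(dtypes)
```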
processed/train/state.json
ADDED
@@ -0,0 +1,18 @@
{
  "_data_files": [
    {
      "filename": "dataset.arrow"
    }
  ],
  "_fingerprint": "8b159b4b6558fd1c",
  "_format_columns": [
    "target",
    "text"
  ],
  "_format_kwargs": {},
  "_format_type": null,
  "_indexes": {},
  "_indices_data_files": null,
  "_output_all_columns": false,
  "_split": null
}
processed/valid/dataset.arrow
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fc8a415f8c9ce874cd01debb9acbfcd0ac861518371af8d1a764bb984c1919f3
size 1535488
processed/valid/dataset_info.json
ADDED
@@ -0,0 +1,37 @@
{
  "builder_name": null,
  "citation": "",
  "config_name": null,
  "dataset_size": null,
  "description": "AutoNLP generated dataset",
  "download_checksums": null,
  "download_size": null,
  "features": {
    "target": {
      "dtype": "string",
      "id": null,
      "_type": "Value"
    },
    "text": {
      "dtype": "string",
      "id": null,
      "_type": "Value"
    }
  },
  "homepage": "",
  "license": "",
  "post_processed": null,
  "post_processing_size": null,
  "size_in_bytes": null,
  "splits": {
    "valid": {
      "name": "valid",
      "num_bytes": 1534630,
      "num_examples": 1446,
      "dataset_name": null
    }
  },
  "supervised_keys": null,
  "task_templates": null,
  "version": null
}
processed/valid/state.json
ADDED
@@ -0,0 +1,18 @@
{
  "_data_files": [
    {
      "filename": "dataset.arrow"
    }
  ],
  "_fingerprint": "94d4ae37ec74c852",
  "_format_columns": [
    "target",
    "text"
  ],
  "_format_kwargs": {},
  "_format_type": null,
  "_indexes": {},
  "_indices_data_files": null,
  "_output_all_columns": false,
  "_split": null
}