autotrain-data-processor committed on
Commit 6af8474
1 Parent(s): bc36b55

Processed data from AutoTrain data processor (2022-10-24 10:10)
README.md ADDED
@@ -0,0 +1,55 @@
+ ---
+ {}
+ ---
+ # AutoTrain Dataset for project: dragino-7-7-max_495m
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoTrain for project dragino-7-7-max_495m.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is unk.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+   {
+     "feat_rssi": -91,
+     "feat_snr": 7.5,
+     "target": 125.0
+   },
+   {
+     "feat_rssi": -96,
+     "feat_snr": 5.0,
+     "target": 125.0
+   }
+ ]
+ ```
+
+ ### Dataset Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+   "feat_rssi": "Value(dtype='int64', id=None)",
+   "feat_snr": "Value(dtype='float64', id=None)",
+   "target": "Value(dtype='float32', id=None)"
+ }
+ ```
+
+ ### Dataset Splits
+
+ This dataset is split into a train and a validation split. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ---------- | ----------- |
+ | train      | 853         |
+ | valid      | 286         |
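The sample record and field declarations in the card above together define a simple fixed schema. As a quick sanity check, a record can be validated against the declared dtypes in plain Python — a minimal sketch, not part of the repository; the dtype-to-Python-type mapping is my own assumption for illustration:

```python
# Validate a sample record against the schema declared in the dataset card.
# The dtype strings come from the card; PYTHON_TYPES is an illustrative
# assumption (Arrow float32/float64 both surface as Python float).
SCHEMA = {
    "feat_rssi": "int64",    # received signal strength
    "feat_snr": "float64",   # signal-to-noise ratio
    "target": "float32",     # regression target
}

PYTHON_TYPES = {"int64": int, "float64": float, "float32": float}

def validate(record: dict) -> bool:
    """Return True if the record has exactly the schema's fields and types."""
    if set(record) != set(SCHEMA):
        return False
    return all(isinstance(record[name], PYTHON_TYPES[dtype])
               for name, dtype in SCHEMA.items())

sample = {"feat_rssi": -91, "feat_snr": 7.5, "target": 125.0}
print(validate(sample))  # True
```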
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
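This small index file is what tells a loader which split subdirectories exist under `processed/`. A minimal sketch of reading it in plain Python (the directory layout assumed here is the one shown in this commit):

```python
import json

# Contents of processed/dataset_dict.json as committed above.
dataset_dict_json = '{"splits": ["train", "valid"]}'

splits = json.loads(dataset_dict_json)["splits"]

# Each split name corresponds to a subdirectory (processed/train,
# processed/valid) holding dataset.arrow, dataset_info.json, and state.json.
for name in splits:
    print(f"processed/{name}/dataset.arrow")
```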
processed/train/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3ea3f06f70fd55abb11d87589a98670560d4c456f85e8772c675f4d5208684bd
+ size 17936
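Note that the committed `.arrow` file is a Git LFS pointer, not the raw Arrow bytes: each line is a `key value` pair per the LFS spec. A small sketch of parsing such a pointer (the helper name is mine, not from the repo):

```python
# Parse a Git LFS pointer file (spec: https://git-lfs.github.com/spec/v1).
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:3ea3f06f70fd55abb11d87589a98670560d4c456f85e8772c675f4d5208684bd
size 17936
"""

def parse_lfs_pointer(text: str) -> dict:
    # Each non-empty line is "key value"; split on the first space only.
    fields = dict(line.split(" ", 1) for line in text.splitlines() if line)
    # The oid field encodes "algorithm:digest".
    algo, digest = fields["oid"].split(":", 1)
    return {"version": fields["version"], "algo": algo,
            "digest": digest, "size": int(fields["size"])}

info = parse_lfs_pointer(POINTER)
print(info["algo"], info["size"])  # sha256 17936
```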
processed/train/dataset_info.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "builder_name": null,
+   "citation": "",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "AutoTrain generated dataset",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "feat_rssi": {
+       "dtype": "int64",
+       "id": null,
+       "_type": "Value"
+     },
+     "feat_snr": {
+       "dtype": "float64",
+       "id": null,
+       "_type": "Value"
+     },
+     "target": {
+       "dtype": "float32",
+       "id": null,
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 17167,
+       "num_examples": 853,
+       "dataset_name": null
+     }
+   },
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
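As a rough consistency check (my own arithmetic, not from the file): the schema is fixed-width, so each row is 8 + 8 + 4 = 20 bytes, and 853 rows come to 17,060 bytes — close to the reported `num_bytes` of 17,167, with the small remainder presumably table overhead:

```python
# Cross-check num_bytes against the fixed-width schema (illustrative only).
DTYPE_BYTES = {"int64": 8, "float64": 8, "float32": 4}  # bytes per value

features = {"feat_rssi": "int64", "feat_snr": "float64", "target": "float32"}
row_bytes = sum(DTYPE_BYTES[d] for d in features.values())  # 20 bytes per row

num_examples = 853
print(row_bytes * num_examples)  # 17060, vs. reported num_bytes of 17167
```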
processed/train/state.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "884c3b4cb4aa4974",
+   "_format_columns": [
+     "feat_rssi",
+     "feat_snr",
+     "target"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }
processed/valid/dataset.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2da110e69011e807e2e60accc0c670289b14c040f5f585703c7e04c2121690a5
+ size 6520
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,42 @@
+ {
+   "builder_name": null,
+   "citation": "",
+   "config_name": null,
+   "dataset_size": null,
+   "description": "AutoTrain generated dataset",
+   "download_checksums": null,
+   "download_size": null,
+   "features": {
+     "feat_rssi": {
+       "dtype": "int64",
+       "id": null,
+       "_type": "Value"
+     },
+     "feat_snr": {
+       "dtype": "float64",
+       "id": null,
+       "_type": "Value"
+     },
+     "target": {
+       "dtype": "float32",
+       "id": null,
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "post_processed": null,
+   "post_processing_size": null,
+   "size_in_bytes": null,
+   "splits": {
+     "valid": {
+       "name": "valid",
+       "num_bytes": 5756,
+       "num_examples": 286,
+       "dataset_name": null
+     }
+   },
+   "supervised_keys": null,
+   "task_templates": null,
+   "version": null
+ }
processed/valid/state.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "_data_files": [
+     {
+       "filename": "dataset.arrow"
+     }
+   ],
+   "_fingerprint": "59bf6a78af4f70f1",
+   "_format_columns": [
+     "feat_rssi",
+     "feat_snr",
+     "target"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_indexes": {},
+   "_output_all_columns": false,
+   "_split": null
+ }