autotrain-data-processor committed on
Commit b06a316
1 Parent(s): c41544e

Processed data from AutoTrain data processor (2023-06-14 22:00)

README.md ADDED
@@ -0,0 +1,62 @@
+ ---
+ language:
+ - en
+ - ar
+ task_categories:
+ - translation
+
+ ---
+ # AutoTrain Dataset for project: fhdd_arabic_chatbot
+
+ ## Dataset Description
+
+ This dataset has been automatically processed by AutoTrain for project fhdd_arabic_chatbot.
+
+ ### Languages
+
+ The BCP-47 code for the dataset's language is en2ar.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ A sample from this dataset looks as follows:
+
+ ```json
+ [
+   {
+     "feat_sourceLang": "ara",
+     "feat_targetlang": "eng",
+     "target": "\u064a\u0646\u0628\u063a\u064a \u0623\u0646 \u062a\u064f\u0638\u0647\u0631 \u0627\u0644\u0646\u0651\u0633\u0627\u0621 \u0648\u062c\u0648\u0647\u0647\u0646\u0651.",
+     "source": "Women should have their faces visible."
+   },
+   {
+     "feat_sourceLang": "ara",
+     "feat_targetlang": "eng",
+     "target": "\u0623\u062a\u062f\u0631\u0633 \u0627\u0644\u0625\u0646\u062c\u0644\u064a\u0632\u064a\u0629\u061f",
+     "source": "Do you study English?"
+   }
+ ]
+ ```
+
+ ### Dataset Fields
+
+ The dataset has the following fields (also called "features"):
+
+ ```json
+ {
+   "feat_sourceLang": "Value(dtype='string', id=None)",
+   "feat_targetlang": "Value(dtype='string', id=None)",
+   "target": "Value(dtype='string', id=None)",
+   "source": "Value(dtype='string', id=None)"
+ }
+ ```
+
+ ### Dataset Splits
+
+ This dataset is split into a train and a validation split. The split sizes are as follows:
+
+ | Split name | Num samples |
+ | ---------- | ----------- |
+ | train      | 15622       |
+ | valid      | 3906        |
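The `\uXXXX` sequences in the sample records above are ordinary JSON Unicode escapes for the Arabic `target` text. A small standalone sketch (values copied from the second sample record) shows that the standard-library `json` module decodes them back to readable Arabic:

```python
import json

# Standalone sketch: decode one sample record from the README above.
# The raw string keeps the \uXXXX escapes literal so json.loads does the decoding.
record_json = r'''
{
  "feat_sourceLang": "ara",
  "feat_targetlang": "eng",
  "target": "\u0623\u062a\u062f\u0631\u0633 \u0627\u0644\u0625\u0646\u062c\u0644\u064a\u0632\u064a\u0629\u061f",
  "source": "Do you study English?"
}
'''
record = json.loads(record_json)
print(record["source"])  # the English side of the pair
print(record["target"])  # decoded Arabic: "أتدرس الإنجليزية؟"
```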
processed/dataset_dict.json ADDED
@@ -0,0 +1 @@
+ {"splits": ["train", "valid"]}
processed/train/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a4b57eda4302fd3eb982c57b9edc17eab377984ced9265b5f14356053699c5cd
+ size 1610096
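The `.arrow` files in this commit are Git LFS pointer files like the three lines above, not the data itself; the real bytes live in LFS storage. A minimal sketch of reading such a pointer (`parse_lfs_pointer` is a hypothetical helper; the "key value" line format follows the Git LFS pointer spec):

```python
# Sketch: parse a Git LFS pointer file into a dict of its key/value lines.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents copied from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:a4b57eda4302fd3eb982c57b9edc17eab377984ced9265b5f14356053699c5cd
size 1610096
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # "1610096" -- about 1.6 MB for the train split's Arrow file
```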
processed/train/dataset_info.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "citation": "",
+   "description": "AutoTrain generated dataset",
+   "features": {
+     "feat_sourceLang": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "feat_targetlang": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "target": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "source": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "splits": {
+     "train": {
+       "name": "train",
+       "num_bytes": 1603243,
+       "num_examples": 15622,
+       "dataset_name": null
+     }
+   }
+ }
processed/train/state.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "9d8f472c5e6090f9",
+   "_format_columns": [
+     "feat_sourceLang",
+     "feat_targetlang",
+     "source",
+     "target"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }
processed/valid/data-00000-of-00001.arrow ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:004f1a5528e62e6c71dae1b83ca027a7c7278049df90e862fcd44b123a28659f
+ size 410848
processed/valid/dataset_info.json ADDED
@@ -0,0 +1,32 @@
+ {
+   "citation": "",
+   "description": "AutoTrain generated dataset",
+   "features": {
+     "feat_sourceLang": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "feat_targetlang": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "target": {
+       "dtype": "string",
+       "_type": "Value"
+     },
+     "source": {
+       "dtype": "string",
+       "_type": "Value"
+     }
+   },
+   "homepage": "",
+   "license": "",
+   "splits": {
+     "valid": {
+       "name": "valid",
+       "num_bytes": 408689,
+       "num_examples": 3906,
+       "dataset_name": null
+     }
+   }
+ }
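A quick back-of-envelope sketch using the `num_bytes` and `num_examples` values recorded in the two `dataset_info.json` files: the average serialized record size is nearly identical across splits, which is what one would expect from a random train/valid partition of the same corpus.

```python
# Arithmetic sketch; the four numbers are copied from the dataset_info.json files above.
train_bytes, train_examples = 1603243, 15622
valid_bytes, valid_examples = 408689, 3906

avg_train = train_bytes / train_examples
avg_valid = valid_bytes / valid_examples
print(round(avg_train, 1), round(avg_valid, 1))  # roughly 102.6 and 104.6 bytes per record
```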
processed/valid/state.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "_data_files": [
+     {
+       "filename": "data-00000-of-00001.arrow"
+     }
+   ],
+   "_fingerprint": "771a33e09c2f7399",
+   "_format_columns": [
+     "feat_sourceLang",
+     "feat_targetlang",
+     "source",
+     "target"
+   ],
+   "_format_kwargs": {},
+   "_format_type": null,
+   "_output_all_columns": false,
+   "_split": null
+ }