boyuanzheng010 committed
Commit f27b636
1 Parent(s): 1e98bc5

Update README.md

Files changed (1)
  1. README.md +6 -5
README.md CHANGED
@@ -59,7 +59,9 @@ dataset_info:
 
 ### Dataset Summary
 
-Multimodal-Mind2Web is the multimodal version of [Mind2Web](https://osu-nlp-group.github.io/Mind2Web/), a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website. In this dataset, we align each HTML document in the dataset with its corresponding webpage screenshot image from the Mind2Web raw dump, which undergoes human verification to confirm element visibility and correct rendering for action prediction.
+Multimodal-Mind2Web is the multimodal version of [Mind2Web](https://osu-nlp-group.github.io/Mind2Web/), a dataset for developing and evaluating generalist agents
+for the web that can follow language instructions to complete complex tasks on any website. In this dataset, we align each HTML document with
+its corresponding webpage screenshot image from the Mind2Web raw dump. This multimodal version removes the inconvenience of loading images from the ~300GB Mind2Web raw dump.
 
 ## Dataset Structure
 
@@ -69,19 +71,18 @@ Multimodal-Mind2Web is the multimodal version of [Mind2Web](https://osu-nlp-grou
 - test_website: 1019 actions from 142 tasks. Websites are not seen during training.
 - test_domain: 4060 actions from 694 tasks. Entire domains are not seen during training.
 
-The three test set splits have went through human verification to confirm element visibility and correct rendering for action prediction.
+The **_train_** set may include some screenshot images that were not properly rendered, due to rendering issues during Mind2Web annotation. The three **_test splits (test_task, test_website, test_domain)_** have undergone human verification to confirm element visibility and correct rendering for action prediction.
 
-Each line in the dataset is an action consisting of screenshot image, HTML text and other fields required for action prediction, for the convenience of inference.
 
 ### Data Fields
-
+For convenience of inference, each line in the dataset is an action consisting of the screenshot image, HTML text, and other fields required for action prediction.
 - "annotation_id" (str): unique id for each task
 - "website" (str): website name
 - "domain" (str): website domain
 - "subdomain" (str): website subdomain
 - "confirmed_task" (str): task description
 - "action_reprs" (list[str]): human readable string representation of the action sequence
-- "screenshot" (str): path to the webpage screenshot image corresponding to the HTML.
+- **"screenshot" (str): path to the webpage screenshot image corresponding to the HTML.**
 - "action_uid" (str): unique id for each action (step)
 - "raw_html" (str): raw html of the page before the action is performed
 - "cleaned_html" (str): cleaned html of the page before the action is performed