flaviagiammarino committed
Commit fdcdd3e
1 Parent(s): 68167c7

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -47,7 +47,7 @@ publishers and authors of these two books, and the owners of the PEIR digital li
 **Leaderboard:** [Papers with Code Leaderboard](https://paperswithcode.com/sota/medical-visual-question-answering-on-pathvqa)
 
 ### Dataset Summary
-The data was obtained from the updated Google Drive link shared by the authors on Feb 15, 2023,
+The dataset was obtained from the updated Google Drive link shared by the authors on Feb 15, 2023,
 see the [commit](https://github.com/UCSD-AI4H/PathVQA/commit/117e7f4ef88a0e65b0e7f37b98a73d6237a3ceab)
 in the GitHub repository. This version of the dataset contains a total of 5,004 images and 32,795 question-answer pairs.
 Out of the 5,004 images, 4,289 images are referenced by a question-answer pair, while 715 images are not used.
@@ -55,7 +55,7 @@ There are a few image-question-answer triplets which occur more than once in the
 After dropping the duplicate image-question-answer triplets, the dataset contains 32,632 question-answer pairs on 4,289 images.
 
 #### Supported Tasks and Leaderboards
-This dataset has an active leaderboard on [Papers with Code](https://paperswithcode.com/sota/medical-visual-question-answering-on-pathvqa)
+The PathVQA dataset has an active leaderboard on [Papers with Code](https://paperswithcode.com/sota/medical-visual-question-answering-on-pathvqa)
 where models are ranked based on three metrics: "Yes/No Accuracy", "Free-form accuracy" and "Overall accuracy". "Yes/No Accuracy" is
 the accuracy of a model's generated answers for the subset of binary "yes/no" questions. "Free-form accuracy" is the accuracy
 of a model's generated answers for the subset of open-ended questions. "Overall accuracy" is the accuracy of a model's generated
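The three leaderboard metrics in the edited README split one pool of question-answer pairs into a binary "yes/no" subset and an open-ended subset. A minimal sketch of that split, assuming exact-match scoring of generated answers (the leaderboard does not fix a scoring protocol here, and `leaderboard_metrics` is a hypothetical helper, not part of the dataset):

```python
# Hypothetical sketch of the three PathVQA leaderboard metrics,
# assuming case-insensitive exact-match scoring; actual submissions
# may score answers differently.

def accuracy(pairs):
    """Fraction of (predicted, reference) answer pairs that match exactly."""
    if not pairs:
        return 0.0
    return sum(p.strip().lower() == r.strip().lower() for p, r in pairs) / len(pairs)

def leaderboard_metrics(predictions, references):
    """predictions, references: parallel lists of answer strings."""
    pairs = list(zip(predictions, references))
    # "Yes/No Accuracy": only questions whose reference answer is yes or no.
    yes_no = [(p, r) for p, r in pairs if r.strip().lower() in {"yes", "no"}]
    # "Free-form accuracy": the remaining, open-ended questions.
    free_form = [(p, r) for p, r in pairs if r.strip().lower() not in {"yes", "no"}]
    return {
        "yes_no_accuracy": accuracy(yes_no),
        "free_form_accuracy": accuracy(free_form),
        "overall_accuracy": accuracy(pairs),  # all questions together
    }
```

Because the two subsets partition the data, "Overall accuracy" is the size-weighted average of the other two metrics.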