michelecafagna26 committed
Commit ebf1e8c • 1 Parent(s): 1ad23ca
Update README.md
README.md CHANGED
@@ -64,7 +64,7 @@ Each axis is collected by asking the following 3 questions:
 
 **The high-level descriptions capture the human interpretations of the images**. These interpretations contain abstract concepts not directly linked to physical objects.
 Each high-level description is provided with a _confidence score_, crowdsourced by an independent worker measuring the extent to which
-the high-level description is likely given the corresponding image, question, and caption. The higher the score, the more the high-level caption
+the high-level description is likely given the corresponding image, question, and caption. The higher the score, the closer the high-level caption is to common sense (on a Likert scale from 1 to 5).
 
 - **🗃️ Repository:** [github.com/michelecafagna26/HL-dataset](https://github.com/michelecafagna26/HL-dataset)
 - **📄 Paper:** [HL Dataset: Grounding High-Level Linguistic Concepts in Vision](https://arxiv.org/pdf/2302.12189.pdf)
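
As an illustration of how the confidence score described above could be used in practice, here is a minimal sketch based on the 🤗 `datasets` library. The dataset id `michelecafagna26/hl` and the `confidence` field layout are assumptions for illustration only, not taken from this diff; check the repository README for the actual schema.

```python
# Minimal sketch: load the HL dataset and keep only examples whose crowdsourced
# confidence score is high. The dataset id and field names are assumptions.
from datasets import load_dataset

ds = load_dataset("michelecafagna26/hl", split="train")  # assumed dataset id

# Assumed layout: each example carries a per-axis confidence score
# on a 1-5 Likert scale, e.g. ex["confidence"]["scene"].
high_conf = ds.filter(lambda ex: ex["confidence"]["scene"] >= 4)

print(f"{len(high_conf)} / {len(ds)} examples with scene confidence >= 4")
```

Filtering on the confidence score is one way to trade dataset size for annotations that workers judged closer to common sense.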