kcz358 committed on
Commit
073ed4f
1 Parent(s): 0d6d904

Update README.md

Files changed (1)
  1. README.md +37 -2
README.md CHANGED
@@ -23,6 +23,41 @@ configs:
   - split: test
     path: data/test-*
 ---
-# Dataset Card for "flickr30k"
-
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+
+
+<p align="center" width="100%">
+<img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%">
+</p>
+
+# Large-scale Multi-modality Models Evaluation Suite
+
+> Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval`
+
+🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab)
+
+# This Dataset
+
+This is a formatted version of [Flickr30k](https://shannon.cs.illinois.edu/DenotationGraph/). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models.
+
+```
+@article{young-etal-2014-image,
+    title = "From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions",
+    author = "Young, Peter and
+      Lai, Alice and
+      Hodosh, Micah and
+      Hockenmaier, Julia",
+    editor = "Lin, Dekang and
+      Collins, Michael and
+      Lee, Lillian",
+    journal = "Transactions of the Association for Computational Linguistics",
+    volume = "2",
+    year = "2014",
+    address = "Cambridge, MA",
+    publisher = "MIT Press",
+    url = "https://aclanthology.org/Q14-1006",
+    doi = "10.1162/tacl_a_00166",
+    pages = "67--78",
+    abstract = "We propose to use the visual denotations of linguistic expressions (i.e. the set of images they describe) to define novel denotational similarity metrics, which we show to be at least as beneficial as distributional similarities for two tasks that require semantic inference. To compute these denotational similarities, we construct a denotation graph, i.e. a subsumption hierarchy over constituents and their denotations, based on a large corpus of 30K images and 150K descriptive captions.",
+}
+```