Tony Wu committed
Commit 6db62ef • Parent(s): bc3b144

docs: add note in model submission markdown about repo name casing
app.py CHANGED

@@ -5,10 +5,10 @@ from data.model_handler import ModelHandler
 
 METRICS = ["ndcg_at_5", "recall_at_1"]
 
-def main():
+def main():
     model_handler = ModelHandler()
     initial_metric = "ndcg_at_5"
-
+
     data = model_handler.get_vidore_data(initial_metric)
     data = add_rank_and_format(data)
 
@@ -48,7 +48,7 @@ def main():
     gr.Markdown(
         """
         Visual Document Retrieval Benchmark leaderboard. To submit results, refer to the corresponding tab.
-
+
         Refer to the [ColPali paper](https://arxiv.org/abs/2407.01449) for details on metrics, tasks and models.
         """
     )
@@ -125,9 +125,10 @@ def main():
 
     1. **Evaluate your model**:
     - Follow the evaluation script provided in the [ViDoRe GitHub repository](https://github.com/illuin-tech/vidore-benchmark/)
-
+
     2. **Format your submission file**:
-    - The submission file should automatically be generated, and named `results.json` with the
+    - The submission file should automatically be generated, and named `results.json` with the
+    following structure:
     ```json
     {
         "dataset_name_1": {
@@ -142,13 +143,19 @@ def main():
         },
     }
     ```
-    - The dataset names should be the same as the ViDoRe dataset names listed in the following
-
+    - The dataset names should be the same as the ViDoRe dataset names listed in the following
+    collection: [ViDoRe Benchmark](https://huggingface.co/collections/vidore/vidore-benchmark-667173f98e70a1c0fa4db00d).
+
     3. **Submit your model**:
     - Create a public HuggingFace model repository with your model.
-    - Add the tag `vidore` to your model in the metadata of the model card and place the
-
-
+    - Add the tag `vidore` to your model in the metadata of the model card and place the
+    `results.json` file at the root.
+
+    And you're done! Your model will appear on the leaderboard when you click refresh! Once the space
+    gets rebooted, it will appear on startup.
+
+    Note: For proper hyperlink redirection, please ensure that your model repository name is in
+    kebab-case, e.g. `my-model-name`.
     """
 )
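The "add the tag `vidore` to your model in the metadata of the model card" step refers to the YAML front matter at the top of the repository's `README.md`. A minimal front-matter fragment might look like this (any other metadata fields the model card needs would sit alongside `tags`):

```yaml
---
tags:
- vidore
---
```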
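For reference, the `results.json` structure described in the submission instructions above can be sketched in Python. This is a minimal sketch, not output of the real evaluation script: the metric keys are an assumption taken from the `METRICS` list in `app.py` (`ndcg_at_5`, `recall_at_1`), and the dataset names and scores are placeholders.

```python
import json

# Hypothetical submission file matching the structure shown in the commit's
# markdown: one entry per ViDoRe dataset, mapping metric names to scores.
# Metric names are assumed from METRICS in app.py; scores are made up.
results = {
    "dataset_name_1": {
        "ndcg_at_5": 0.81,
        "recall_at_1": 0.67,
    },
    "dataset_name_2": {
        "ndcg_at_5": 0.74,
        "recall_at_1": 0.58,
    },
}

# Write the file that would be placed at the root of the model repository.
with open("results.json", "w") as f:
    json.dump(results, f, indent=4)

# Sanity-check the structure before submitting: each dataset maps metric
# names to numeric scores.
with open("results.json") as f:
    loaded = json.load(f)
for dataset, metrics in loaded.items():
    assert isinstance(metrics, dict)
    assert all(isinstance(score, float) for score in metrics.values())
```

In practice the evaluation script in the ViDoRe GitHub repository generates this file; the sketch is only useful for checking that a generated file has the expected shape.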