      value: 5.9
---
![image/png](https://cdn-uploads.huggingface.co/production/uploads/654bb2591a9e65ef2598d8c4/0Z-GMa6SSLgXFmrplC0WD.png)

# OpenStreetView-5M <br><sub>The Many Roads to Global Visual Geolocation 📍🌍</sub>

**First authors:** [Guillaume Astruc](https://gastruc.github.io/), [Nicolas Dufour](https://nicolas-dufour.github.io/), [Ioannis Siglidis](https://imagine.enpc.fr/~siglidii/)
**Second authors:** [Constantin Aronssohn](), Nacim Bouia, [Stephanie Fu](https://stephanie-fu.github.io/), [Romain Loiseau](https://romainloiseau.fr/), [Van Nguyen Nguyen](https://nv-nguyen.github.io/), [Charles Raude](https://imagine.enpc.fr/~raudec/), [Elliot Vincent](https://imagine.enpc.fr/~vincente/), Lintao Xu, Hongyu Zhou
**Last author:** [Loic Landrieu](https://loiclandrieu.com/)
**Research Institute:** [Imagine](https://imagine.enpc.fr/), _LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, Marne-la-Vallée, France_

## Introduction 🌍

[OpenStreetView-5M](https://huggingface.co/datasets/osv5m/osv5m) is the first large-scale open geolocation benchmark of street-view images.
To get a sense of the difficulty of the benchmark, you can play with our [demo](https://huggingface.co/spaces/osv5m/plonk).
Our dataset was used in an extensive benchmark, for which we provide the best-performing model.
For more details and results, please check out our [paper](arxiv) and [project page](https://imagine.enpc.fr/~guillaume-astruc/osv-5m).

### Inference 🔥

![image/png](https://cdn-uploads.huggingface.co/production/uploads/654bb2591a9e65ef2598d8c4/mmTZy5ELTwLiLap8pO4xV.png)

Our best model on OSV-5M can also be found on [huggingface](https://huggingface.co/osv5m/baseline).
First, download the repo: `git clone https://github.com/gastruc/osv5m`.
Then, from any script whose `cwd` is the repo's main directory (`cd osv5m`), run:

```python
from PIL import Image
from models.huggingface import Geolocalizer  # helper shipped in the osv5m repo

geoloc = Geolocalizer.from_pretrained('osv5m/baseline')
img = Image.open('.media/examples/img1.jpeg')
x = geoloc.transform(img).unsqueeze(0)  # transform the image using our dedicated transformer
gps = geoloc(x)  # B, 2 (lat, lon - tensor in rad)
```
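Note that the returned coordinates are in radians. As a quick sanity check you can convert them to the usual degrees; the sketch below uses plain Python floats (the input pair is hypothetical; on the actual output tensor, `torch.rad2deg(gps)` does the same in one call):

```python
import math

def rad_to_deg(lat_rad: float, lon_rad: float):
    """Convert a (lat, lon) pair from radians to degrees."""
    return math.degrees(lat_rad), math.degrees(lon_rad)

# Hypothetical model output, roughly the coordinates of Paris in radians
lat, lon = rad_to_deg(0.8527, 0.0410)
print(round(lat, 1), round(lon, 1))  # 48.9 2.3
```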

To reproduce results for this model, run:

```bash
python evaluation.py exp=eval_best_model dataset.global_batch_size=1024
```
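For intuition about what such an evaluation measures: geolocation benchmarks typically score a prediction by the great-circle (haversine) distance between the predicted and ground-truth coordinates. A minimal sketch (an illustration only, not the benchmark's evaluation code; the coordinate pairs below are hypothetical):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points given in radians."""
    earth_radius_km = 6371.0
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * earth_radius_km * math.asin(math.sqrt(a))

# Approximate coordinates of Paris and London in radians, about 344 km apart
print(f"{haversine_km(0.8527, 0.0410, 0.8990, -0.0022):.0f} km")
```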

### Citing 💫

```bibtex
@article{osv5m,
    title = {{OpenStreetView-5M}: {T}he Many Roads to Global Visual Geolocation},
    author = {Astruc, Guillaume and Dufour, Nicolas and Siglidis, Ioannis
      and Aronssohn, Constantin and Bouia, Nacim and Fu, Stephanie and Loiseau, Romain
      and Nguyen, Van Nguyen and Raude, Charles and Vincent, Elliot and Xu, Lintao
      and Zhou, Hongyu and Landrieu, Loic},
    journal = {CVPR},
    year = {2024},
}
```