---
library_name: diffusers
base_model: stabilityai/stable-diffusion-2-1-base
license: apache-2.0
widget:
- src: osm_tile_18_42048_101323.jpeg
  prompt: Satellite image features a city neighborhood
tags:
- controlnet
- stable-diffusion
- satellite-imagery
- OSM
pipeline_tag: image-to-image
---

# Model Card for GeoSynth-OSM

This is a ControlNet-based model that synthesizes satellite images given OpenStreetMap images. The base Stable Diffusion model used is [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) (v2-1_512-ema-pruned.ckpt).

  * Use it with 🧨 [diffusers](#examples)
  * Use it with the [ControlNet](https://github.com/lllyasviel/ControlNet/tree/main?tab=readme-ov-file) repository

### Model Sources

- **Repository:** [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
- **Paper:** [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543)

## Examples

```python
import torch
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from PIL import Image

# OpenStreetMap tile used as the ControlNet conditioning image
img = Image.open("osm_tile_18_42048_101323.jpeg")

# Load the GeoSynth-OSM ControlNet weights
controlnet = ControlNetModel.from_pretrained("MVRL/GeoSynth-OSM")

# Attach the ControlNet to the Stable Diffusion 2.1 base pipeline
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", controlnet=controlnet
)
pipe = pipe.to("cuda:0")

# Generate a satellite image conditioned on the OSM tile
generator = torch.manual_seed(10345340)
image = pipe(
    "Satellite image features a city neighborhood",
    generator=generator,
    image=img,
).images[0]

image.save("generated_city.jpg")
```
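
The call above uses the pipeline defaults. As a rough sketch (not settings published with the model), the standard `diffusers` arguments `num_inference_steps`, `guidance_scale`, and `controlnet_conditioning_scale` can be used to trade speed for quality and to adjust how strongly the OSM tile constrains the output; the values below are illustrative assumptions.

```python
# Illustrative values only; not recommendations from the model authors.
image = pipe(
    "Satellite image features a city neighborhood",
    image=img,
    generator=torch.manual_seed(10345340),
    num_inference_steps=50,             # more denoising steps: slower, often sharper
    guidance_scale=7.5,                 # classifier-free guidance strength
    controlnet_conditioning_scale=1.0,  # weight of the OSM conditioning signal
).images[0]
```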

## Citation

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## More Information

[More Information Needed]

## Model Card Authors

[More Information Needed]

## Model Card Contact

[More Information Needed]