---
license: creativeml-openrail-m
tags:
- pytorch
- diffusers
- stable-diffusion
- text-to-image
- diffusion-models-class
- dreambooth-hackathon
- landscape
widget:
- text: a photo of ppaine landscape at night, NIKON Z FX
---
# DreamBooth Hackathon '23: How can we use a text-to-image generative model to explore the cinematographic appeal of Torres del Paine 🇨🇱?
> _Torres del Paine National Park is a national park encompassing mountains, glaciers, lakes, and rivers in southern Chilean Patagonia._
> _It is also part of the End of the World Route, a tourist scenic route. [Wikipedia](https://en.wikipedia.org/wiki/Torres_del_Paine_National_Park)_
- Reddit post: [DreamBooth Hackathon: How can we use a text-to-image model to explore the cinematographic appeal of Torres del Paine 🇨🇱?](https://www.reddit.com/r/StableDiffusion/comments/109fjdu/dreambooth_hackaton_how_can_we_use_a_texttoimage/)
## Description
DreamBooth model for the ppaine concept trained by alkzar90 on the alkzar90/torres-del-paine dataset.
This is a Stable Diffusion model fine-tuned on the ppaine concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of ppaine landscape**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
This is a Stable Diffusion model fine-tuned on `landscape` images for the landscape theme.
## Cinematographic rendering & object/artifact insertion
### Animal Statues
## Director's eye view
What does the director's cut concept mean? The definition by the [Merriam-Webster dictionary](https://www.merriam-webster.com/dictionary/director%27s%20cut#:~:text=noun,version%20created%20for%20general%20distribution) is: _"a version of a motion picture that is edited according to the director's wishes and that usually includes scenes cut from the version created for general distribution"_.
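One way to play "director" with this model is to sweep cinematographic modifiers around the instance prompt and compare the resulting cuts. A minimal sketch of building such a prompt grid (the modifier lists and the helper name are illustrative assumptions, not the prompts actually used for the images in this card):

```python
from itertools import product

# Instance prompt the model was fine-tuned with (see Description above).
INSTANCE_TOKEN = "a photo of ppaine landscape"

# Illustrative cinematographic modifiers -- assumptions for this sketch.
styles = ["cinematic lighting", "35mm film still", "wide-angle aerial shot"]
times = ["at dawn", "at night"]

def director_cut_prompts(token=INSTANCE_TOKEN):
    """Build a grid of cinematographic prompt variations around the instance token."""
    return [f"{token}, {time}, {style}" for time, style in product(times, styles)]

prompts = director_cut_prompts()
print(prompts[0])  # "a photo of ppaine landscape, at dawn, cinematic lighting"
```

Each prompt in the grid can then be passed to the pipeline shown in the Usage section below to render one "take" per cinematographic treatment.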
## Artistic Style Transfer
## Usage
```python
from diffusers import StableDiffusionPipeline

# Load the fine-tuned pipeline (move it to a GPU if available: pipeline.to("cuda"))
pipeline = StableDiffusionPipeline.from_pretrained('alkzar90/ppaine-landscape')

# The pipeline requires a prompt; use the instance prompt the model was trained with
image = pipeline('a photo of ppaine landscape').images[0]
image
```
## References
* [DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation (Ruiz et al., 2022)](https://arxiv.org/abs/2208.12242)
* [High-Resolution Image Synthesis with Latent Diffusion Models (Rombach et al., 2022)](https://arxiv.org/abs/2112.10752)
* [Training Stable Diffusion with Dreambooth using 🧨 Diffusers (Post)](https://huggingface.co/blog/dreambooth)
* [Hugging Face DreamBooth Hackathon](https://github.com/huggingface/diffusion-models-class/tree/main/hackathon)
## Acknowledgements
Thanks to [John Whitaker](https://github.com/johnowhitaker) and [Lewis Tunstall](https://github.com/lewtun) for writing up and describing the initial hackathon parameters at https://huggingface.co/dreambooth-hackathon.