Update README.md
#4 by scelebi - opened

README.md CHANGED
@@ -1,90 +1,37 @@

Removed:

---
license: mit
prior:
- warp-diffusion/wuerstchen-prior
tags:
- text-to-image
- wuerstchen
---

Würstchen is a diffusion model whose text-conditional component works in a highly compressed latent space of images. Why is this important? Compressing data can reduce computational costs for both training and inference by orders of magnitude: training on 1024x1024 images is far more expensive than training on 32x32. Usually, other works make use of a relatively small compression, in the range of 4x-8x spatial compression. Würstchen takes this to an extreme. Through its novel design, we achieve a 42x spatial compression, previously unseen, because common methods fail to faithfully reconstruct detailed images after 16x spatial compression. Würstchen employs a two-stage compression, which we call Stage A and Stage B: Stage A is a VQGAN, and Stage B is a Diffusion Autoencoder (more details can be found in the [paper](https://arxiv.org/abs/2306.00637)). A third model, Stage C, is learned in that highly compressed latent space. This training requires a fraction of the compute used by current top-performing models, which also makes inference cheaper and faster.
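
To make these factors concrete, here is a bit of illustrative arithmetic (not from the card itself) showing the spatial latent size of a 1024x1024 image at different compression factors:

```py
# Illustrative arithmetic: spatial side length of a 1024x1024 image's latent
# at different spatial compression factors.
for factor in (4, 8, 16, 42):
    side = 1024 / factor
    print(f"{factor}x compression -> ~{side:.0f}x{side:.0f} latent")
# 4x -> 256x256, 8x -> 128x128, 16x -> 64x64, 42x -> ~24x24
```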

**Note:** The reconstructions are lossy and can lose details of the image. The current Stage B sometimes lacks fine details, which are especially noticeable to us humans when looking at faces, hands, etc. We are working on making these reconstructions even better in the future!

Würstchen was trained on image resolutions between 1024x1024 and 1536x1536. We sometimes also observe good outputs at resolutions like 1024x2048. Feel free to try it out.
We also observed that the Prior (Stage C) adapts extremely fast to new resolutions, so finetuning it at 2048x2048 should be computationally cheap.

<img src="https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/5pA5KUfGmvsObqiIjdGY1.jpeg" width=1000>

This pipeline should be run together with a prior, https://huggingface.co/warp-ai/wuerstchen-prior:

```py
import torch
from diffusers import AutoPipelineForText2Image

device = "cuda"
dtype = torch.float16

# Load the combined text-to-image pipeline (prior + decoder).
pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-diffusion/wuerstchen", torch_dtype=dtype
).to(device)

caption = "Anthropomorphic cat dressed as a fire fighter"  # example prompt

output = pipeline(
    prompt=caption,
    height=1024,
    width=1024,
    prior_guidance_scale=4.0,
    decoder_guidance_scale=0.0,
).images
```

### Image Sampling Times
The figure shows the inference times (on an A100) for different batch sizes (`num_images_per_prompt`) for Würstchen compared to [Stable Diffusion XL](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) (without refiner).
The left plot shows inference times using torch > 2.0 as-is, whereas the right plot applies `torch.compile` to both pipelines in advance; a sketch of that compile step follows the figure.
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/634cb5eefb80cc6bcaf63c3e/UPhsIH2f079ZuTA_sLdVe.jpeg)
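
For reference, the compile step can look roughly like the sketch below. The module attribute names (`prior_prior`, `decoder`) are assumptions about how the combined pipeline registers its denoising components; inspect `pipeline.components` in your diffusers version before relying on them:

```py
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "warp-diffusion/wuerstchen", torch_dtype=torch.float16
).to("cuda")

# Compile the main denoising modules once up front; the first call pays the
# compilation cost, subsequent calls run faster.
# NOTE: the attribute names below are assumptions -- check pipeline.components.
pipeline.prior_prior = torch.compile(pipeline.prior_prior, mode="reduce-overhead")
pipeline.decoder = torch.compile(pipeline.decoder, mode="reduce-overhead")
```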

## Model Details
- **Developed by:** Pablo Pernias, Dominic Rampas
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** MIT
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a diffusion model in the style of Stage C from the [Würstchen paper](https://arxiv.org/abs/2306.00637) that uses a fixed, pretrained text encoder ([CLIP ViT-bigG/14](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
- **Resources for more information:** [GitHub Repository](https://github.com/dome272/Wuerstchen), [Paper](https://arxiv.org/abs/2306.00637).
- **Cite as:**
```bibtex
@misc{pernias2023wuerstchen,
      title={Wuerstchen: Efficient Pretraining of Text-to-Image Models},
      author={Pablo Pernias and Dominic Rampas and Marc Aubreville},
      year={2023},
      eprint={2306.00637},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Environmental Impact

**Würstchen v2** **Estimated Emissions**
Based on this information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 24602
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (power consumption x time x carbon produced based on location of power grid):** 2275.68 kg CO2 eq.

Added:

Multi-Indicator 4-Hour Trading Strategy

Objective: The aim of this strategy is to identify potential buying and selling opportunities using various technical indicators on a 4-hour time frame.

Requirements:

1. A trading platform that supports a 4-hour time frame.
2. The following technical indicators (a minimal sketch of how a few of them can be computed follows this list):
   - Average Directional Index (ADX)
   - Directional Movement Index (DMI)
   - Chaikin Accumulation/Distribution Oscillator
   - Moving Average Convergence Divergence (MACD)
   - Double Exponential Moving Average (DEMA)
   - Money Flow Index (MFI)
   - Percentage Price Oscillator (PPO)
   - Rate of Change (ROC)
   - Relative Strength Index (RSI)
   - Exponential Moving Average (EMA)
   - Kaufman's Adaptive Moving Average (KAMA)
   - MESA Adaptive Moving Average (MAMA)
   - Impulse
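
For illustration, here is a minimal sketch of how a few of the listed indicators can be computed from price data with pandas. The column name (`close`) and the default periods are assumptions for the example, not values prescribed by the strategy:

```py
import pandas as pd

def ema(close: pd.Series, span: int) -> pd.Series:
    """Exponential Moving Average."""
    return close.ewm(span=span, adjust=False).mean()

def dema(close: pd.Series, span: int) -> pd.Series:
    """Double Exponential Moving Average: 2*EMA - EMA(EMA)."""
    e = ema(close, span)
    return 2 * e - ema(e, span)

def rsi(close: pd.Series, period: int = 14) -> pd.Series:
    """Relative Strength Index with Wilder-style smoothing."""
    delta = close.diff()
    gain = delta.clip(lower=0).ewm(alpha=1 / period, adjust=False).mean()
    loss = (-delta.clip(upper=0)).ewm(alpha=1 / period, adjust=False).mean()
    return 100 - 100 / (1 + gain / loss)

def macd(close: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9):
    """Return (macd_line, signal_line, histogram)."""
    line = ema(close, fast) - ema(close, slow)
    sig = line.ewm(span=signal, adjust=False).mean()
    return line, sig, line - sig

def roc(close: pd.Series, period: int = 10) -> pd.Series:
    """Rate of Change, in percent."""
    return close.pct_change(periods=period) * 100
```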

Strategy Description:

The strategy employs the technical indicators listed above to generate signals on a 4-hour time frame.

Buy Signals: Any of the technical indicators can produce a buy signal when its specific buying conditions are met; each indicator has its own set of buy signal conditions.

Sell Signals: Similarly, any of the technical indicators can produce a sell signal when its specific selling conditions are met.

Minimum Signal Counts: The strategy requires a minimum of 16 buy signals for a buy trade and 12 sell signals for a sell trade; a trade is triggered only when the combined signals reach the respective count. Since only 13 indicators are listed, these counts imply that a single indicator can contribute more than one signal condition. A minimal sketch of the counting logic follows.
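
Everything in the sketch below is illustrative scaffolding, assuming each indicator contributes a list of boolean buy/sell condition checks; the names and structure are not from the strategy itself:

```py
from dataclasses import dataclass, field
from typing import Callable, List

import pandas as pd

# A "check" looks at the 4-hour OHLCV frame and returns True if one specific
# buy/sell condition currently holds. The concrete conditions per indicator
# are up to the trader; these names are placeholders.
Check = Callable[[pd.DataFrame], bool]

@dataclass
class IndicatorSignals:
    name: str
    buy_checks: List[Check] = field(default_factory=list)
    sell_checks: List[Check] = field(default_factory=list)

MIN_BUY_SIGNALS = 16   # from the strategy text
MIN_SELL_SIGNALS = 12  # from the strategy text

def decide(bars: pd.DataFrame, indicators: List[IndicatorSignals]) -> str:
    """Count fired conditions across all indicators on the latest bar."""
    buys = sum(c(bars) for ind in indicators for c in ind.buy_checks)
    sells = sum(c(bars) for ind in indicators for c in ind.sell_checks)
    if buys >= MIN_BUY_SIGNALS:
        return "buy"
    if sells >= MIN_SELL_SIGNALS:
        return "sell"
    return "hold"
```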

Before confirming a trade, the strategy searches for combinations of conditions that individually produce buy or sell signals for each technical indicator.

Risk management is crucial. Setting stop-loss and take-profit levels for each trade is essential to limit potential losses and realize gains; one simple way to derive such levels is sketched below.
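
As one illustrative approach (the fixed 2% stop and 2:1 reward-to-risk below are placeholder choices, not part of the strategy):

```py
def protective_levels(entry: float, side: str,
                      stop_pct: float = 0.02, reward_risk: float = 2.0):
    """Return (stop_loss, take_profit) for a trade.

    stop_pct and reward_risk are illustrative defaults; tune them to the
    instrument's volatility (e.g., via ATR) and your own risk tolerance.
    """
    risk = entry * stop_pct
    if side == "buy":
        return entry - risk, entry + reward_risk * risk
    if side == "sell":
        return entry + risk, entry - reward_risk * risk
    raise ValueError("side must be 'buy' or 'sell'")

# Example: a long entry at 100.0 gives stop 98.0 and target 104.0.
print(protective_levels(100.0, "buy"))
```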

This strategy trades using a variety of technical indicators and relies on their intersections and conditions to generate buy and sell signals. However, it is important to remember that no strategy can guarantee profits without risk. Testing the strategy in a demo account and applying risk management are essential before trading live.