Update README.md
More information needed

# Technical Specifications

## Model Architecture and Objective

The model architecture is based on the original Swin2 architecture for Super Resolution (SR) tasks. The [transformers](https://github.com/huggingface/transformers) library is used to simplify the model design.

The main component of the model is a [transformers.Swin2SRModel](https://huggingface.co/docs/transformers/model_doc/swin2sr#transformers.Swin2SRModel), which increases the spatial resolution of its inputs by a factor of 8 (Swin2SR only supports upscaling ratios that are powers of 2).
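
As a sketch of this component, the snippet below instantiates a Swin2SR super-resolution model with `upscale=8` from the `transformers` library and runs it on a dummy (40, 60) input. The configuration values here are deliberately tiny, illustrative ones, not the parameters of this model:

```python
import torch
from transformers import Swin2SRConfig, Swin2SRForImageSuperResolution

# Tiny, illustrative configuration -- NOT the parameters of this model.
config = Swin2SRConfig(
    embed_dim=16,
    depths=[2],
    num_heads=[2],
    window_size=4,
    upscale=8,  # Swin2SR upscaling ratio (a power of 2)
)
model = Swin2SRForImageSuperResolution(config)
model.eval()

# Dummy batch: one 3-channel (40, 60) input, as produced by the pre-processing step.
pixel_values = torch.randn(1, 3, 40, 60)
with torch.no_grad():
    outputs = model(pixel_values)

# Spatial dimensions are multiplied by the upscaling ratio.
print(tuple(outputs.reconstruction.shape))
```

With `upscale=8`, the (40, 60) input yields a (320, 480) reconstruction.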

As the real upscaling ratio is ~5 and the output shape of the region considered is (160, 240), a Convolutional Neural Network (CNN) is included as a pre-processing component that converts the inputs into (40, 60) feature maps that can be fed to the Swin2SRModel.
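
The card does not describe the layers of this CNN or the exact input grid size, so the following is only an illustrative sketch: an assumed (32, 48) input grid (roughly (160, 240) divided by the ~5 upscaling ratio) is resized and projected into (40, 60) feature maps. `PreprocessCNN` and all of its layer choices are hypothetical:

```python
import torch
from torch import nn
import torch.nn.functional as F

class PreprocessCNN(nn.Module):
    # Hypothetical pre-processing CNN: the layer choices here are illustrative,
    # since the model card does not describe the actual architecture.
    def __init__(self, in_channels: int = 3, hidden: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, hidden, kernel_size=3, padding=1)
        self.proj = nn.Conv2d(hidden, in_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Bring the input grid to the (40, 60) resolution expected by the Swin2SRModel.
        x = F.interpolate(x, size=(40, 60), mode="bicubic", align_corners=False)
        return self.proj(F.relu(self.conv(x)))

# Assumed low-resolution input grid of (32, 48) (not stated explicitly in the card).
x = torch.randn(1, 3, 32, 48)
features = PreprocessCNN()(x)
print(tuple(features.shape))  # (1, 3, 40, 60)
```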

This network is trained to learn the residuals of the bicubic interpolation.

The specific parameters of this network are available in [config.json](https://huggingface.co/predictia/convswin2sr_mediterranean/blob/main/config.json).

## Compute Infrastructure

The use of GPUs in deep learning projects significantly accelerates model training and inference, reducing computation time and making it feasible to tackle complex tasks and large datasets efficiently.

The generosity and collaboration of our partners are instrumental to the success of this project, significantly contributing to our research and development endeavors.

#### :pray: Our resource providers :pray:

- **AI4EOSC**: AI4EOSC stands for "Artificial Intelligence for the European Open Science Cloud". The European Open Science Cloud (EOSC) is a European Union initiative that aims to create a federated environment of research data and services. AI4EOSC is a project within the EOSC framework that focuses on the integration and application of artificial intelligence (AI) technologies in open science.

- **European Weather Cloud**: The European Weather Cloud is the cloud-based collaboration platform for meteorological application development and operations in Europe. Services provided range from the delivery of weather forecast data and products to the provision of computing and storage resources, support, and expert advice.

### Hardware

For this project, we have deployed two virtual machines (VMs), each with a dedicated Graphics Processing Unit (GPU): one VM is equipped with a 16 GB GPU and the other with a 20 GB GPU. This configuration allows us to efficiently handle a wide range of computing tasks, from data processing to deep learning, and drives the successful execution of the project.

### Software

The code used to train and evaluate this model is freely available in its GitHub repository, [ECMWFCode4Earth/DeepR](https://github.com/ECMWFCode4Earth/DeepR), hosted in the ECMWF Code 4 Earth organization.

### Authors