pereza committed on
Commit
a33e081
1 Parent(s): f520d6d

Update README.md

Files changed (1)
  1. README.md +32 -30
README.md CHANGED
@@ -18,24 +18,24 @@ tags:
 
 # Europe Reanalysis Super Resolution
 
- The aim of the project is to create a Machine learning (ML) model that can generate high-resolution regional reanalysis data (similar to the one produced by CERRA) by downscaling global reanalysis data from ERA5.
 
- This will be accomplished by using state-of-the-art Deep Learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally, an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained, a detailed validation framework takes the place.
 
- It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics, disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes. This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice.
 
-
- Moreover, tools for interpretability of DL models can be used to understand the inner workings and decision-making processes of these complex structures by analyzing the activations of different neurons and the importance of different features in the input data.
-
- This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative. The model **ConvSwin2SR** is released in Apache 2.0, making it usable without restrictions anywhere.
-
-
-
-
-
- # Table of Contents
 
 - [Model Card for Europe Reanalysis Super Resolution](#model-card-for--model_id-)
 - [Table of Contents](#table-of-contents)
@@ -57,8 +57,10 @@ This work is funded by [Code for Earth 2023](https://codeforearth.ecmwf.int/) in
 - [Metrics](#metrics)
 - [Results](#results)
 - [Technical Specifications](#technical-specifications-optional)
- - [Model Architecture and Objective](#model-architecture-and-objective)
- - [Loss function](#loss-function)
 - [Computing Infrastructure](#computing-infrastructure)
 - [Hardware](#hardware)
 - [Software](#software)
@@ -185,12 +187,6 @@ across different temporal segments.
 The testing data samples correspond to the three-year period from 2018 to 2020, inclusive. This segment is crucial for assessing the model's real-world applicability and
 its performance on the most recent data points, ensuring its relevance and reliability in current and future scenarios.
 
-
- ### Factors
-
- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
-
-
 ## Results
 
 In our evaluation, the proposed model displayed a significant enhancement over the established baseline, which employs bicubic interpolation for the same task.
@@ -210,23 +206,29 @@ In comparison to the bicubic interpolation baseline, our model's superior predic
 ![rmse](metric_global_map_diff_var-rmse.png)
 
 
-
 # Technical Specifications
 
- ## Model Architecture and Objective
 
- The model architecture is based on the original Swin2 architecture for Super Resolution (SR) tasks. The library [transformers](https://github.com/huggingface/transformers) is used to simplify the model design.
 
- ![architecture](architecture.png)
 
- The main component of the model is a [transformers.Swin2SRModel](https://huggingface.co/docs/transformers/model_doc/swin2sr#transformers.Swin2SRModel) which increases x8 the spatial resolution of its inputs (Swin2SR only supports upscaling ratios power of 2).
- As the real upscale ratio is ~5 and the output shape of the region considered is (160, 240), a Convolutional Neural Network (CNN) is included as a pre-process component which convert the inputs into a (20, 30) feature maps that can be fed to the Swin2SRModel.
 
- This network is trained to learn the residuals of the bicubic interpolation.
 
- The specific parameters of this network are available in [config.json](https://huggingface.co/predictia/convswin2sr_mediterranean/blob/main/config.json).
 
- ## Loss function
 
 The Swin2 transformer optimizes its parameters using a composite loss function that aggregates multiple \( \mathcal{L}_1 \) loss terms to enhance its predictive
 accuracy across different resolutions and representations:
 
 
 # Europe Reanalysis Super Resolution
 
+ The aim of the project is to create a Machine learning (ML) model that can generate high-resolution regional reanalysis data (similar to the one produced by CERRA) by
+ downscaling global reanalysis data from ERA5.
 
+ This will be accomplished by using state-of-the-art Deep Learning (DL) techniques like U-Net, conditional GAN, and diffusion models (among others). Additionally,
+ an ingestion module will be implemented to assess the possible benefit of using CERRA pseudo-observations as extra predictors. Once the model is designed and trained,
+ a detailed validation framework is put in place.
 
+ It combines classical deterministic error metrics with in-depth validations, including time series, maps, spatio-temporal correlations, and computer vision metrics,
+ disaggregated by months, seasons, and geographical regions, to evaluate the effectiveness of the model in reducing errors and representing physical processes.
+ This level of granularity allows for a more comprehensive and accurate assessment, which is critical for ensuring that the model is effective in practice.
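
As a rough illustration of this kind of disaggregated verification (this is not project code: the data below are synthetic stand-ins on a small grid, and xarray is only an assumed tooling choice), seasonal RMSE maps and scores could be computed along these lines:

```python
import numpy as np
import pandas as pd
import xarray as xr

# Synthetic stand-ins for the downscaled prediction and the CERRA reference,
# both on a common (time, y, x) grid; a small grid keeps the toy example light.
time = pd.date_range("2018-01-01", "2020-12-31", freq="D")
rng = np.random.default_rng(0)
truth = xr.DataArray(rng.normal(size=(time.size, 32, 48)),
                     dims=("time", "y", "x"), coords={"time": time})
pred = truth + rng.normal(scale=0.5, size=truth.shape)

sq_err = (pred - truth) ** 2
mse_by_season = sq_err.groupby("time.season").mean(dim="time")   # MSE map per season (DJF/MAM/JJA/SON)
rmse_maps = np.sqrt(mse_by_season)                                # RMSE maps for spatial diagnostics
rmse_scores = np.sqrt(mse_by_season.mean(dim=("y", "x")))         # one aggregate RMSE value per season
print(rmse_scores.to_series())
```

The same pattern extends to monthly breakdowns (grouping by `"time.month"`) or to geographical regions by subsetting the spatial coordinates before aggregating.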
 
+ Moreover, tools for interpretability of DL models can be used to understand the inner workings and decision-making processes of these complex structures by analyzing
+ the activations of different neurons and the importance of different features in the input data.
 
+ This work is funded by the [Code for Earth 2023](https://codeforearth.ecmwf.int/) initiative. The model **ConvSwin2SR** is released under the Apache 2.0 license, making it
+ freely usable anywhere.
 
+ # Table of Contents
 
 - [Model Card for Europe Reanalysis Super Resolution](#model-card-for--model_id-)
 - [Table of Contents](#table-of-contents)
 
 - [Metrics](#metrics)
 - [Results](#results)
 - [Technical Specifications](#technical-specifications-optional)
+ - [Model Architecture](#model-architecture)
+ - [Components](#components)
+ - [Configuration details](#configuration-details)
+ - [Loss function](#loss-function)
 - [Computing Infrastructure](#computing-infrastructure)
 - [Hardware](#hardware)
 - [Software](#software)
 
 The testing data samples correspond to the three-year period from 2018 to 2020, inclusive. This segment is crucial for assessing the model's real-world applicability and
 its performance on the most recent data points, ensuring its relevance and reliability in current and future scenarios.
 
 ## Results
 
 In our evaluation, the proposed model displayed a significant enhancement over the established baseline, which employs bicubic interpolation for the same task.
 
 ![rmse](metric_global_map_diff_var-rmse.png)
 
 
 # Technical Specifications
 
+ ## Model Architecture
+
+ The model is based on the Swin2 architecture for Super Resolution (SR) tasks. The [transformers library](https://github.com/huggingface/transformers) is used to simplify its implementation.
 
+ ![Model Architecture](architecture.png)
 
+ ### Components
 
+ - **Transformers Component**: the core of the model is a [transformers.Swin2SRModel](https://huggingface.co/docs/transformers/model_doc/swin2sr#transformers.Swin2SRModel), which increases the spatial resolution of its inputs by a factor of 8 (Swin2SR only supports upscaling ratios that are powers of 2).
+ - **Convolutional Neural Network (CNN) Component**: since the actual upscale ratio is approximately 5 and the target output shape is (160, 240), a CNN is included as a preprocessing unit that transforms the inputs into (20, 30) feature maps suitable for the Swin2SRModel.
+
+ The underlying objective of this network is to learn the residuals of the bicubic interpolation.
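
To make the data flow concrete, here is a minimal, hypothetical sketch of this two-stage layout. It is not the released implementation: the (32, 48) input grid, the five input variables, the tiny layer sizes, and the explicit addition of the residual to a bicubic baseline are illustrative assumptions; the actual hyperparameters are in the config.json referenced below.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import Swin2SRConfig, Swin2SRForImageSuperResolution

num_vars = 5                                  # hypothetical number of input variables
era5 = torch.randn(1, num_vars, 32, 48)       # coarse ERA5-like batch; (32, 48) is an assumed grid size

# CNN pre-processing component: bring the coarse fields to the (20, 30) maps the Swin2SR stage expects.
cnn = nn.Sequential(
    nn.Conv2d(num_vars, 32, kernel_size=3, padding=1),
    nn.GELU(),
    nn.AdaptiveAvgPool2d((20, 30)),
    nn.Conv2d(32, num_vars, kernel_size=3, padding=1),
)

# Swin2SR stage configured with an x8 upsampler: (20, 30) -> (160, 240). Sizes are deliberately small.
config = Swin2SRConfig(num_channels=num_vars, upscale=8,
                       embed_dim=60, depths=[2, 2], num_heads=[4, 4])
swin2sr = Swin2SRForImageSuperResolution(config)

# The card states the network learns the residual of the bicubic interpolation,
# so the final prediction is the bicubic baseline plus the learned correction.
with torch.no_grad():
    baseline = F.interpolate(era5, size=(160, 240), mode="bicubic", align_corners=False)
    residual = swin2sr(pixel_values=cnn(era5)).reconstruction
    prediction = baseline + residual

print(prediction.shape)  # torch.Size([1, 5, 160, 240])
```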
 
+ ### Configuration Details
 
+ The specific parameters governing the model's behavior are detailed in the
+ [config.json](https://huggingface.co/predictia/convswin2sr_mediterranean/blob/main/config.json).
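
For instance, a quick way to fetch and inspect that file programmatically (assuming the `huggingface_hub` package is installed; repository and file names are as linked above):

```python
import json
from huggingface_hub import hf_hub_download

# Download config.json from the model repository and list its hyperparameter names.
path = hf_hub_download(repo_id="predictia/convswin2sr_mediterranean", filename="config.json")
with open(path) as f:
    params = json.load(f)
print(sorted(params))
```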
 
+ ### Loss function
 
 The Swin2 transformer optimizes its parameters using a composite loss function that aggregates multiple \( \mathcal{L}_1 \) loss terms to enhance its predictive
 accuracy across different resolutions and representations: