Update README.md
README.md CHANGED
@@ -19,18 +19,16 @@ model-index:

# NASA Solar Dynamics Observatory Vision Transformer v.1 (SDO_VT1)

-## Authors:
-
-
-
-
-
-Transformer models have become the go-to standard in natural language processing (NLP). Their performance is often unmatched in tasks such as question answering, classification, summarization, and language translation. Recently, the success of their characteristic sequence-to-sequence architecture and attention mechanism has also been noted in other domains, such as computer vision, where they have earned comparable praise for their performance on various vision tasks. In contrast to, for example, Convolutional Neural Networks (CNNs), Transformers achieve higher representational power through their ability to exploit a large receptive field. However, Vision Transformers also come with increased complexity and computational cost, which may deter scientists from choosing such a model. We demonstrate the applicability of a Vision Transformer model (SDOVIS) to SDO data in an active-region classification task, as well as the benefits of utilizing the HuggingFace libraries, data and model repositories, and deployment strategies for inference. We aim to highlight the ease of use of the HuggingFace platform, its integration with popular deep learning frameworks such as PyTorch, TensorFlow, and JAX, performance monitoring with Weights & Biases, and the ability to effortlessly utilize pre-trained, large-scale Transformer models for targeted fine-tuning purposes.
-
-
-
-The authors gratefully acknowledge the entire NASA Solar Dynamics Observatory Team.
-Additionally, the data used was provided courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.
+## Authors:
+[Frank Soboczenski](https://h21k.github.io/), King's College London, London, UK
+[Paul Wright](https://www.wrightai.com/), Wright AI Ltd, Leeds, UK
+
+## General:
+This Vision Transformer model has been fine-tuned on Solar Dynamics Observatory (SDO) data. The images used are available in the [Solar Dynamics Observatory Gallery](https://sdo.gsfc.nasa.gov/gallery/main/search).
+This is, to our knowledge, the first Vision Transformer model trained on NASA SDO mission data; this first version targets an active-region classification task, and we are working on additional versions to address further challenges in this domain. We aim to highlight the ease of use of the HuggingFace platform, its integration with popular deep learning frameworks such as PyTorch, TensorFlow, and JAX, performance monitoring with Weights & Biases, and the ability to effortlessly utilize pre-trained, large-scale Transformer models for targeted fine-tuning purposes.
+
+The data used was provided courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.
+The authors gratefully acknowledge the entire NASA Solar Dynamics Observatory Mission Team.

## Example Images
--> Drag one of the images below into the inference API field on the upper right.
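The "General" section added in this commit describes fine-tuning a pre-trained ViT on SDO imagery with the HuggingFace stack and monitoring runs in Weights & Biases. The sketch below illustrates that workflow under stated assumptions: the base checkpoint (`google/vit-base-patch16-224-in21k`), the data directory, and the hyperparameters are placeholders, not the authors' actual training configuration.

```python
# Illustrative fine-tuning sketch (not the authors' exact setup): a ViT
# checkpoint is adapted to an active-region classification dataset laid out
# as an image folder with one sub-directory per class.
import torch
from datasets import load_dataset
from transformers import (
    Trainer,
    TrainingArguments,
    ViTForImageClassification,
    ViTImageProcessor,
)

checkpoint = "google/vit-base-patch16-224-in21k"  # assumed base model
dataset = load_dataset("imagefolder", data_dir="./sdo_active_regions")  # hypothetical path
labels = dataset["train"].features["label"].names

processor = ViTImageProcessor.from_pretrained(checkpoint)
model = ViTForImageClassification.from_pretrained(checkpoint, num_labels=len(labels))

def transform(batch):
    # Resize and normalize PIL images into the pixel-value tensors the ViT expects.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

dataset = dataset.with_transform(transform)

def collate(examples):
    # Stack individual examples into a training batch.
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["labels"] for ex in examples]),
    }

args = TrainingArguments(
    output_dir="./sdo_vt1",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    report_to="wandb",  # stream training metrics to Weights & Biases
)

Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    data_collator=collate,
).train()
```

The `report_to="wandb"` argument is what streams loss curves to Weights & Biases; everything else is the stock `Trainer` loop.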
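The closing line above refers to the hosted inference widget; the same model can also be queried programmatically through the HuggingFace Inference API. A minimal sketch, assuming a placeholder model id and a local example image:

```python
# Query the hosted HuggingFace Inference API with a local SDO image.
# The model id below is a placeholder; substitute this repository's actual id.
import requests

API_URL = "https://api-inference.huggingface.co/models/<namespace>/SDO_VT1"
HEADERS = {"Authorization": "Bearer hf_..."}  # your HuggingFace access token

def classify(image_path: str):
    # Image-classification endpoints accept the raw image bytes as the request body.
    with open(image_path, "rb") as f:
        response = requests.post(API_URL, headers=HEADERS, data=f.read())
    response.raise_for_status()
    return response.json()  # e.g. [{"label": "...", "score": 0.97}, ...]

print(classify("sdo_example.jpg"))
```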