---
tags:
- image-classification
- pytorch
metrics:
- accuracy
model-index:
- name: SDO_VT1
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8695651888847351
---
# NASA Solar Dynamics Observatory Vision Transformer v.1 (SDO_VT1)
## Authors: Frank Soboczenski, PhD (King's College London)
This Vision Transformer model has been fine-tuned on NASA Solar Dynamics Observatory (SDO) data for an active region classification task (first stage).
Transformer models have become the go-to standard in natural language processing (NLP). Their performance is often unmatched in tasks such as question answering, classification, summarization, and language translation. Recently, the success of their characteristic sequence-to-sequence architecture and attention mechanism has also been noted in other domains, such as computer vision, where they have achieved comparable performance on various vision tasks. In contrast to, for example, Convolutional Neural Networks (CNNs), Transformers achieve higher representation power due to their ability to exploit a large receptive field. However, Vision Transformers also come with increased complexity and computational cost, which may deter scientists from choosing such a model.

We demonstrate the applicability of a Vision Transformer model (SDOVIS) on SDO data in an active region classification task, as well as the benefits of utilizing the HuggingFace libraries, data and model repositories, and deployment strategies for inference. We aim to highlight the ease of use of the HuggingFace platform, its integration with popular deep learning frameworks such as PyTorch, TensorFlow, or JAX, performance monitoring with Weights & Biases, and the ability to effortlessly utilize pre-trained large-scale Transformer models for targeted fine-tuning.
The authors gratefully acknowledge the entire NASA Solar Dynamics Observatory Team.
Additionally, the data used was provided courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.
## Example Images
Drag one of the images below into the inference API field on the upper right.
Additional images for testing can be found at:
[Solar Dynamics Observatory Gallery](https://sdo.gsfc.nasa.gov/gallery/main/search)
You can use the following tags to further select images for testing:
"coronal holes", "loops" or "flares"
You can also choose "active regions" to get a general pool for testing.
#### NASA_SDO_Coronal_Hole
![NASA_SDO_Coronal_Hole](images/NASA_SDO_Coronal_Hole2.jpg)
#### NASA_SDO_Coronal_Loop
![NASA_SDO_Coronal_Loop](images/NASA_SDO_Coronal_Loop.jpg)
#### NASA_SDO_Solar_Flare
![NASA_SDO_Solar_Flare](images/NASA_SDO_Solar_Flare.jpg)
## How to use this Model
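A minimal inference sketch using the HuggingFace `transformers` image-classification pipeline is shown below. The repository id `"<user>/SDO_VT1"` is a placeholder, not the model's actual Hub id; substitute the id of this model card's repository before running.

```python
# Minimal inference sketch for this model via the transformers pipeline.
# NOTE: "<user>/SDO_VT1" is a placeholder -- replace it with the actual
# HuggingFace Hub repository id of this model.
from transformers import pipeline

classifier = pipeline("image-classification", model="<user>/SDO_VT1")

# Classify a local SDO image (e.g., one of the example images above).
predictions = classifier("images/NASA_SDO_Coronal_Hole2.jpg")

# Each prediction is a dict containing a label and a confidence score.
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```

The same images linked in the "Example Images" section above can be dragged into the hosted inference API widget instead, which runs the identical pipeline server-side.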