kenobi committed on
Commit 9c6c144
1 Parent(s): 6c4690e

Update README.md

Files changed (1)
  1. README.md +21 -4
README.md CHANGED
@@ -19,13 +19,27 @@ model-index:
 
  # NASA Solar Dynamics Observatory Vision Transformer v.1 (SDO_VT1)
 
- TEXT TO BE ADDED
+ ## Authors: Frank Soboczenski, PhD (King's College London)
+
+ This Vision Transformer model has been fine-tuned on SDO data for an active region classification task (first stage).
+
+ Transformer models have become the go-to standard in natural language processing (NLP). Their performance is often unmatched in tasks such as question answering, classification, summarization, and language translation. Recently, the success of their characteristic sequence-to-sequence architecture and attention mechanism has also been noted in other domains such as computer vision, where they have earned comparable praise for their performance on various vision tasks. In contrast to, for example, Convolutional Neural Networks (CNNs), Transformers achieve higher representational power due to their ability to exploit a large receptive field. However, Vision Transformers also come with increased complexity and computational cost, which may deter scientists from choosing such a model. We demonstrate the applicability of a Vision Transformer model (SDOVIS) on SDO data in an active region classification task, as well as the benefits of utilizing the HuggingFace libraries, data and model repositories, and deployment strategies for inference. We aim to highlight the ease of use of the HuggingFace platform, its integration with popular deep learning frameworks such as PyTorch, TensorFlow, and JAX, performance monitoring with Weights & Biases, and the ability to effortlessly utilize large-scale pre-trained Transformer models for targeted fine-tuning.
+
+ The authors gratefully acknowledge the entire NASA Solar Dynamics Observatory Team.
+ Additionally, the data used were provided courtesy of NASA/SDO and the AIA, EVE, and HMI science teams.
 
  ## Example Images
- (drag one of them into the inference API field on the upper right)
+ Drag one of the images below into the inference API field on the upper right.
 
- Additional images for testing can be found at: [Solar Dynamics Observatory Gallery](https://sdo.gsfc.nasa.gov/gallery/main/search)
+ Additional images for testing can be found at:
+ [Solar Dynamics Observatory Gallery](https://sdo.gsfc.nasa.gov/gallery/main/search)
+ You can use the tags "coronal holes", "loops", or "flares" to further select images for testing.
+ You can also choose "active regions" to get a general pool for testing.
 
  #### NASA_SDO_Coronal_Hole
 
@@ -37,4 +51,7 @@ Additional images for testing can be found at: [Solar Dynamics Observatory Galle
 
  #### NASA_SDO_Solar_Flare
 
- ![NASA_SDO_Solar_Flare](images/NASA_SDO_Solar_Flare.jpg)
+ ![NASA_SDO_Solar_Flare](images/NASA_SDO_Solar_Flare.jpg)
+
+ ## How to use this Model
+
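The new README text centers on fine-tuning a large pre-trained Vision Transformer on SDO data with the HuggingFace libraries. A minimal fine-tuning sketch of that workflow might look like the following; the base checkpoint `google/vit-base-patch16-224-in21k` and the three class labels are illustrative assumptions, not details taken from this commit.

```python
# Minimal fine-tuning sketch; checkpoint and class labels are assumptions.
import torch
from transformers import AutoImageProcessor, ViTForImageClassification

labels = ["coronal_hole", "loops", "solar_flare"]  # assumed class set
checkpoint = "google/vit-base-patch16-224-in21k"   # assumed base model

processor = AutoImageProcessor.from_pretrained(checkpoint)
model = ViTForImageClassification.from_pretrained(
    checkpoint,
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={name: i for i, name in enumerate(labels)},
)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()

# One illustrative training step; `images` would be a list of PIL images
# of SDO frames and `targets` their integer class indices.
# inputs = processor(images=images, return_tensors="pt")
# loss = model(**inputs, labels=torch.tensor(targets)).loss
# loss.backward()
# optimizer.step()
# optimizer.zero_grad()
```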
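The example-image instructions rely on the hosted inference widget; the same hosted Inference API can also be called over HTTP. A sketch of such a request, assuming the model id `kenobi/SDO_VT1` (inferred from the committer, not stated in the diff) and a placeholder access token:

```python
# Hypothetical request to the hosted Inference API; model id is assumed.
import requests

API_URL = "https://api-inference.huggingface.co/models/kenobi/SDO_VT1"
headers = {"Authorization": "Bearer hf_xxx"}  # your HF access token

with open("images/NASA_SDO_Coronal_Hole.jpg", "rb") as f:
    response = requests.post(API_URL, headers=headers, data=f.read())

# Expected shape: a list of {"label": ..., "score": ...} dicts.
print(response.json())
```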
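The commit adds a "How to use this Model" heading but leaves the section empty. A minimal local-inference sketch for it, again assuming the model id `kenobi/SDO_VT1`:

```python
# Hypothetical local inference; the model id below is an assumption.
from transformers import pipeline

classifier = pipeline("image-classification", model="kenobi/SDO_VT1")
predictions = classifier("images/NASA_SDO_Solar_Flare.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```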