---
tags:
- Instance Segmentation
- Vision Transformers
- CNN
- Optical Space Missions
- Artefact Mapping
- ESA
pretty_name: XAMI-model
license: mit
datasets:
- iulia-elisa/XAMI-dataset
---
# XAMI-model: XMM-Newton optical Artefact Mapping for astronomical Instance segmentation
🚀 Check out the XAMI model, which uses this dataset.
## 💫 Introduction
The code uses images from the XAMI dataset (available on GitHub and HuggingFace🤗). The images represent observations from XMM-Newton's Optical Monitor (XMM-OM). Information about the XMM-OM can be found here:
- XMM-OM User's Handbook: https://www.mssl.ucl.ac.uk/www_xmm/ukos/onlines/uhb/XMM_UHB/node1.html.
- Technical details: https://www.cosmos.esa.int/web/xmm-newton/technical-details-om.
- The XMM-OM instrument paper: https://ui.adsabs.harvard.edu/abs/2001A%26A...365L..36M/abstract.
## 📂 Cloning the repository
```bash
git clone https://github.com/ESA-Datalabs/XAMI-model.git
cd XAMI-model

# create and activate the conda environment
conda env create -f environment.yaml
conda activate xami_model_env
```
## 📊 Downloading the dataset and model checkpoints from HuggingFace🤗
Check `dataset_and_model.ipynb` for downloading the dataset and model weights.
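As a quick alternative to the notebook, the files can be fetched with `huggingface_hub`. This is a minimal sketch: the dataset repo id comes from the card metadata above, while the model repo id and local directories are assumptions, so adjust them to the actual repos.

```python
from huggingface_hub import snapshot_download

# Download the dataset (repo id taken from the card metadata above).
dataset_path = snapshot_download(
    repo_id="iulia-elisa/XAMI-dataset",
    repo_type="dataset",
    local_dir="./data",  # assumed target directory
)

# Download the model weights (repo id is an assumption; replace with the real one).
model_path = snapshot_download(
    repo_id="iulia-elisa/XAMI-model",
    local_dir="./train/weights",  # assumed target directory
)
```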
The dataset is split into train and validation sets and contains annotated artefacts in COCO format for Instance Segmentation. We use multilabel stratified K-fold (k=4) to balance class distributions across splits (see the sketch below). We work with a single split version (out of four) but also provide the means to work with all four versions.
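For reference, a split of this kind can be produced with the `iterative-stratification` package. This is only an illustrative sketch: the package choice, label-matrix construction, and variable names are assumptions, not necessarily how the released splits were generated.

```python
import numpy as np
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold

# Hypothetical inputs: one row per image, one column per artefact class,
# with a 1 where the image contains at least one artefact of that class.
image_ids = np.arange(1000)                  # assumed image identifiers
labels = np.random.randint(0, 2, (1000, 6))  # assumed multilabel matrix

mskf = MultilabelStratifiedKFold(n_splits=4, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(mskf.split(image_ids, labels)):
    # Each fold keeps the per-class frequencies roughly equal across splits.
    print(f"fold {fold}: {len(train_idx)} train / {len(val_idx)} val images")
```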
For details on the dataset structure, please check `Dataset-Structure.md`. We provide the dataset in two formats: COCO format for Instance Segmentation (commonly used by Detectron2 models, as in the sketch below) and the YOLOv8-Seg format used by Ultralytics.
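As an example of reading the COCO-format annotations, here is a minimal sketch with `pycocotools`; the annotation file path is an assumption, so use the actual path from the downloaded split.

```python
from pycocotools.coco import COCO

# Assumed path to one split's annotation file.
coco = COCO("./data/train/_annotations.coco.json")

# List the artefact categories defined in the annotations.
categories = coco.loadCats(coco.getCatIds())
print([cat["name"] for cat in categories])

# Load all annotations for the first image and convert one to a binary mask.
img_id = coco.getImgIds()[0]
anns = coco.loadAnns(coco.getAnnIds(imgIds=img_id))
mask = coco.annToMask(anns[0])  # numpy array, 1 inside the instance
```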
## 💡 Model Inference
After cloning the repository and setting up the environment, use the following Python code for model loading and inference:
```python
from inference.xami_inference import Xami

detr_checkpoint = './train/weights/yolo_weights/yolov8_detect_300e_best.pt'
sam_checkpoint = './train/weights/sam_weights/sam_0_best.pth'

# the SAM checkpoint and model_type (vit_h, vit_t, etc.) must be compatible
detr_sam_pipeline = Xami(
    device='cuda:0',
    detr_checkpoint=detr_checkpoint,  # YOLO(detr_checkpoint)
    sam_checkpoint=sam_checkpoint,
    model_type='vit_t',
    use_detr_masks=True)

# prediction example
masks = detr_sam_pipeline.run_predict(
    './example_images/S0743200101_V.jpg',
    yolo_conf=0.2,  # detection confidence threshold
    show_masks=True)
```
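To inspect the predictions further, here is a short visualization sketch. It assumes `run_predict` returns an iterable of per-instance binary masks as numpy arrays, which is an assumption about the return type rather than documented behaviour.

```python
import matplotlib.pyplot as plt
import numpy as np
from PIL import Image

image = np.asarray(Image.open('./example_images/S0743200101_V.jpg'))

plt.imshow(image)
for mask in masks:  # assumed: iterable of HxW binary masks
    # Overlay each instance mask; regions outside the mask stay transparent.
    plt.imshow(np.ma.masked_where(mask == 0, mask), alpha=0.5, cmap='spring')
plt.axis('off')
plt.show()
```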
## 🚀 Training the model
Check the training `README.md`.
## © Licence
This project is licensed under the MIT license.