iulia-elisa committed
Commit 3987fe2
Parent: ac88be7

Update README.md

Files changed (1): README.md (+70 -1)
README.md CHANGED
@@ -3,6 +3,7 @@ tags:
 - instance-segmentation
 - Vision Transformers
 - CNN
+ pretty_name: XAMI-model
 license: mit
 datasets:
 - iulia-elisa/XAMI-dataset
@@ -20,4 +21,72 @@ Check the **[XAMI model](https://github.com/ESA-Datalabs/XAMI-model)** and the *
 | :---: | :---: |
 | YOLOv8 |[yolov8_segm](https://huggingface.co/iulia-elisa/XAMI/blob/main/yolo_weights/best.pt) |
 | MobileSAM |[sam_vit](https://huggingface.co/iulia-elisa/XAMI/blob/main/sam_weights/sam_0_best.pth) |
- | XAMI |[xami_model](https://huggingface.co/iulia-elisa/XAMI/blob/main/yolo_sam_final.pth) |
+ | XAMI |[xami_model](https://huggingface.co/iulia-elisa/XAMI/blob/main/yolo_sam_final.pth) |
+
+ ## 💫 Introduction
+ The code uses images from the XAMI dataset (available on [Github](https://github.com/ESA-Datalabs/XAMI-dataset) and [HuggingFace🤗](https://huggingface.co/datasets/iulia-elisa/XAMI-dataset)). The images represent observations from XMM-Newton's Optical Monitor (XMM-OM). Information about the XMM-OM can be found here:
+
+ - XMM-OM User's Handbook: https://www.mssl.ucl.ac.uk/www_xmm/ukos/onlines/uhb/XMM_UHB/node1.html
+ - Technical details: https://www.cosmos.esa.int/web/xmm-newton/technical-details-om
+ - The XMM-OM instrument article: https://ui.adsabs.harvard.edu/abs/2001A%26A...365L..36M/abstract
+
+ ## 📂 Cloning the repository
+
+ ```bash
+ git clone https://github.com/ESA-Datalabs/XAMI-model.git
+ cd XAMI-model
+
+ # create and activate the conda environment
+ conda env create -f environment.yaml
+ conda activate xami_model_env
+ ```
+
+ ## 📊 Downloading the dataset and model checkpoints from HuggingFace🤗
+
+ See [dataset_and_model.ipynb](https://github.com/ESA-Datalabs/XAMI-model/blob/main/dataset_and_model.ipynb) for downloading the dataset and model weights.
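+
+ If you prefer to fetch the files directly, here is a minimal sketch using the `huggingface_hub` Python client (the checkpoint file names come from the weights table above; the dataset archive name comes from the download snippet further below):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # model checkpoints from the iulia-elisa/XAMI model repository
+ yolo_path = hf_hub_download(repo_id='iulia-elisa/XAMI', filename='yolo_weights/best.pt')
+ sam_path = hf_hub_download(repo_id='iulia-elisa/XAMI', filename='sam_weights/sam_0_best.pth')
+
+ # dataset archive from the iulia-elisa/XAMI-dataset dataset repository
+ zip_path = hf_hub_download(repo_id='iulia-elisa/XAMI-dataset', filename='xami_dataset.zip',
+                            repo_type='dataset')
+ print(yolo_path, sam_path, zip_path)
+ ```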
+
+ The dataset is split into train and validation sets and contains annotated artefacts in COCO format for Instance Segmentation. We use multilabel stratified k-fold (k=4) to balance class distributions across the splits. We work with a single version of the dataset splits (out of 4), but also provide the means to work with all 4 versions.
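+
+ As an illustration of the splitting idea (a sketch, not the repository's actual split code), a multilabel stratified k-fold can be generated with the `iterative-stratification` package; the label matrix below is dummy data:
+
+ ```python
+ import numpy as np
+ from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
+
+ # dummy data: 12 images, 4 artefact classes (1 = class present in the image)
+ X = np.arange(12).reshape(-1, 1)           # image indices
+ y = np.random.randint(0, 2, size=(12, 4))  # multilabel class matrix
+
+ mskf = MultilabelStratifiedKFold(n_splits=4, shuffle=True, random_state=0)
+ for fold, (train_idx, val_idx) in enumerate(mskf.split(X, y)):
+     print(f'fold {fold}: train={len(train_idx)}, val={len(val_idx)}')
+ ```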
+
+ To better understand our dataset structure, please check [Dataset-Structure.md](https://github.com/ESA-Datalabs/XAMI-dataset/blob/main/Datasets-Structure.md) for more details. We provide the following dataset formats: COCO format for Instance Segmentation (commonly used by [Detectron2](https://github.com/facebookresearch/detectron2) models) and the YOLOv8-Seg format used by [ultralytics](https://github.com/ultralytics/ultralytics).
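+
+ The COCO annotations can be read with `pycocotools`, for example (the annotation file path here is illustrative; see Datasets-Structure.md for the actual layout):
+
+ ```python
+ from pycocotools.coco import COCO
+
+ # illustrative path; adjust to the downloaded split
+ coco = COCO('xami_dataset/train/_annotations.coco.json')
+
+ img_ids = coco.getImgIds()
+ print(f'{len(img_ids)} images, {len(coco.getCatIds())} artefact classes')
+
+ # annotations (segmentation polygons, bounding boxes) for the first image
+ anns = coco.loadAnns(coco.getAnnIds(imgIds=img_ids[0]))
+ print(f'{len(anns)} annotations in image {img_ids[0]}')
+ ```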
+
+ <!-- 1. **Downloading** the dataset archive from [HuggingFace](https://huggingface.co/datasets/iulia-elisa/XAMI-dataset/blob/main/xami_dataset.zip).
+
+ ```bash
+ DEST_DIR='.' # destination folder for the dataset (should usually be set to current directory)
+
+ huggingface-cli download iulia-elisa/XAMI-dataset xami_dataset.zip --repo-type dataset --local-dir "$DEST_DIR" && unzip "$DEST_DIR/xami_dataset.zip" -d "$DEST_DIR" && rm "$DEST_DIR/xami_dataset.zip"
+ ``` -->
+
+ ## 💡 Model Inference
+
+ After cloning the repository and setting up the environment, use the following Python code for model loading and inference:
+
+ ```python
+ from inference.xami_inference import Xami
+
+ detr_checkpoint = './train/weights/yolo_weights/yolov8_detect_300e_best.pt'
+ sam_checkpoint = './train/weights/sam_weights/sam_0_best.pth'
+
+ # the SAM checkpoint and model_type (vit_h, vit_t, etc.) must be compatible
+ detr_sam_pipeline = Xami(
+     device='cuda:0',
+     detr_checkpoint=detr_checkpoint,  # YOLO(detr_checkpoint)
+     sam_checkpoint=sam_checkpoint,
+     model_type='vit_t',
+     use_detr_masks=True)
+
+ # prediction example
+ masks = detr_sam_pipeline.run_predict(
+     './example_images/S0743200101_V.jpg',
+     yolo_conf=0.2,
+     show_masks=True)
+ ```
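+
+ To inspect or persist the predictions, something like the following should work, assuming `run_predict` returns the predicted masks as a list of per-detection NumPy arrays (an assumption; check `inference/xami_inference.py` for the exact return type):
+
+ ```python
+ import numpy as np
+
+ # assumption: each mask is an HxW boolean/uint8 array
+ for i, mask in enumerate(masks):
+     mask = np.asarray(mask)
+     np.save(f'mask_{i}.npy', mask)
+     print(f'mask {i}: {mask.sum()} flagged pixels')
+ ```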
+
+ ## 🚀 Training the model
+
+ See the training [README.md](https://github.com/ESA-Datalabs/XAMI-model/blob/main/train/README.md).
+
+ ## © Licence
+
+ This project is licensed under the [MIT License](LICENSE).