# Model Card for None
This model was trained with ClinicaDL. You can find here all the information about the model architecture and the training parameters.
## General information
## Architecture
This model was trained for **classification** and the architecture chosen is **Conv4_FC3**.
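The card does not spell out the layer composition, but the name suggests four convolutional blocks followed by three fully connected layers. Below is a minimal PyTorch sketch of such a network for the ROI input size recorded further down (`[1, 50, 50, 50]`) and the two-class output; the block structure (conv, batch norm, ReLU, max pooling) and the layer widths are illustrative assumptions, not the ClinicaDL implementation.

```python
# Minimal sketch of a Conv4_FC3-style 3D CNN: 4 conv blocks + 3 FC layers.
# Channel and hidden-layer sizes are illustrative, not taken from ClinicaDL.
import torch
import torch.nn as nn

class Conv4FC3Sketch(nn.Module):
    def __init__(self, input_size=(1, 50, 50, 50), output_size=2, dropout=0.0):
        super().__init__()
        channels = [input_size[0], 8, 16, 32, 64]  # 4 convolutional blocks
        blocks = []
        for in_c, out_c in zip(channels[:-1], channels[1:]):
            blocks += [
                nn.Conv3d(in_c, out_c, kernel_size=3, padding=1),
                nn.BatchNorm3d(out_c),
                nn.ReLU(inplace=True),
                nn.MaxPool3d(2),
            ]
        self.convolutions = nn.Sequential(*blocks)
        # Infer the flattened feature size with a dummy forward pass.
        with torch.no_grad():
            n_features = self.convolutions(torch.zeros(1, *input_size)).numel()
        self.classifier = nn.Sequential(           # 3 fully connected layers
            nn.Flatten(),
            nn.Dropout(dropout),
            nn.Linear(n_features, 1024),
            nn.ReLU(inplace=True),
            nn.Linear(1024, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, output_size),
        )

    def forward(self, x):
        return self.classifier(self.convolutions(x))

model = Conv4FC3Sketch(input_size=(1, 50, 50, 50), output_size=2, dropout=0.0)
logits = model(torch.zeros(4, 1, 50, 50, 50))  # a batch of hippocampus ROI crops
```

The full set of parameters recorded for this training run is listed below.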
- **dropout**: 0.0
- **latent_space_size**: 2
- **feature_size**: 1024
- **n_conv**: 4
- **io_layer_channels**: 8
- **recons_weight**: 1
- **kl_weight**: 1
- **normalization**: batch
- **architecture**: Conv4_FC3
- **multi_network**: False
- **dropout**: 0.0
- **latent_space_dimension**: 64
- **latent_space_size**: 2
- **selection_metrics**: ['loss']
- **label**: diagnosis
- **selection_threshold**: 0.0
- **gpu**: True
- **n_proc**: 32
- **batch_size**: 32
- **evaluation_steps**: 20
- **seed**: 0
- **deterministic**: False
- **compensation**: memory
- **transfer_path**: ../../autoencoders/exp3/maps
- **transfer_selection_metric**: loss
- **use_extracted_features**: False
- **multi_cohort**: False
- **diagnoses**: ['AD', 'CN']
- **baseline**: True
- **normalize**: True
- **data_augmentation**: False
- **sampler**: random
- **n_splits**: 5
- **epochs**: 200
- **learning_rate**: 1e-05
- **weight_decay**: 0.0001
- **patience**: 10
- **tolerance**: 0.0
- **accumulation_steps**: 1
- **optimizer**: Adam
- **preprocessing_dict**: `{'preprocessing': 't1-linear', 'mode': 'roi', 'use_uncropped_image': False, 'roi_list': ['leftHippocampusBox', 'rightHippocampusBox'], 'uncropped_roi': False, 'prepare_dl': False, 'file_type': {'pattern': '*space-MNI152NLin2009cSym_desc-Crop_res-1x1x1_T1w.nii.gz', 'description': 'T1W Image registered using t1-linear and cropped (matrix size 169×208×179, 1 mm isotropic voxels)', 'needed_pipeline': 't1-linear'}}`
- **mode**: roi
- **network_task**: classification
- **caps_directory**: $WORK/../commun/datasets/adni/caps/caps_v2021
- **tsv_path**: $WORK/Aramis_tools/ClinicaDL_tools/experiments_ADDL/data/ADNI/train
- **validation**: KFoldSplit
- **num_networks**: 2
- **label_code**: {'AD': 0, 'CN': 1}
- **output_size**: 2
- **input_size**: [1, 50, 50, 50]
- **loss**: None
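The optimization and model-selection parameters listed above (Adam, learning rate 1e-05, weight decay 0.0001, patience 10, tolerance 0.0, selection on validation loss over 200 epochs at most) translate into an early-stopping loop along the lines of the hedged sketch below. `train_one_epoch` and `evaluate` are hypothetical helpers standing in for the actual ClinicaDL training code.

```python
# Hedged sketch of the recorded optimizer and early-stopping settings;
# train_one_epoch and evaluate are hypothetical placeholders.
import torch

def run_training(model, train_loader, val_loader, train_one_epoch, evaluate):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-05, weight_decay=0.0001)
    best_loss, bad_epochs, tolerance, patience = float("inf"), 0, 0.0, 10

    for epoch in range(200):                                # epochs: 200
        train_one_epoch(model, train_loader, optimizer)
        val_loss = evaluate(model, val_loader)
        if val_loss < best_loss - tolerance:                # improvement on the loss
            best_loss, bad_epochs = val_loss, 0
            torch.save(model.state_dict(), "best_loss.pt")  # selection_metrics: ['loss']
        else:
            bad_epochs += 1
            if bad_epochs >= patience:                      # early stopping
                break
    return best_loss
```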
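Finally, the label code and two-dimensional output above map the network's logits back to a diagnosis. The sketch below is illustrative and reuses the hypothetical `Conv4FC3Sketch` defined earlier in this card, not the ClinicaDL implementation.

```python
# Hedged sketch: turning a 2-class output into an AD/CN prediction.
import torch

label_code = {"AD": 0, "CN": 1}                      # label_code recorded above
index_to_label = {v: k for k, v in label_code.items()}

model = Conv4FC3Sketch(input_size=(1, 50, 50, 50), output_size=2)
model.eval()
with torch.no_grad():
    logits = model(torch.zeros(1, 1, 50, 50, 50))    # one hippocampus ROI crop
    probabilities = torch.softmax(logits, dim=1)
prediction = index_to_label[int(probabilities.argmax(dim=1))]
print(prediction, probabilities.tolist())
```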