This dataset contains the pre-processed fMRI data and the frames sampled from the videos of the public cc2017 dataset [1], as used by NeuroClips [2].
- subj01~3_train/test_fmri.pt: The significant voxels (Bonferroni correction, P < 0.05) were considered to be stimulus-activated voxels and used for subsequent analysis.
The table below compares the number of selected voxels across methods:
| Method | Subject 1 | Subject 2 | Subject 3 |
|---|---|---|---|
| MinD-Video | 6016 | 6224 | 3744 |
| NeuroClips | 13447 | 14828 | 9114 |
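The voxel-selection criterion above can be sketched as follows. This is an illustrative assumption about the procedure (the per-voxel p-values and the whole-brain voxel count here are made up); consult the NeuroClips paper for the exact test used.

```python
# Hypothetical sketch of Bonferroni-corrected voxel selection:
# a voxel survives if its p-value is below alpha divided by the
# number of voxels tested (so the family-wise P stays below 0.05).
n_voxels = 50000                      # hypothetical whole-brain voxel count
alpha = 0.05
threshold = alpha / n_voxels          # Bonferroni-corrected threshold, 1e-6

p_values = [1e-7, 0.01, 2e-6, 0.5]    # hypothetical per-voxel p-values
selected = [i for i, p in enumerate(p_values) if p < threshold]
print(selected)  # indices of stimulus-activated voxels -> [0]
```

With a stricter threshold like this, only strongly stimulus-driven voxels remain, which is why the selected-voxel counts differ so much between methods.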
- GT_train/test_3fps.pt: The videos from the cc2017 dataset were downsampled from 30 FPS to 3 FPS to make a fair comparison with previous methods.
- GT_train/test_caption/emb.pt: The pre-processed captions generated by BLIP-2 and their CLIP embeddings.
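A minimal loading sketch for these files, assuming they hold plain tensors loadable with `torch.load` (the shapes below are placeholders, not documented shapes; here a dummy file is written first so the snippet is self-contained — with the real dataset you would load e.g. `subj01_train_fmri.pt` directly):

```python
import torch

# Stand-in for one of the fMRI files: (n_samples, n_voxels),
# using Subject 1's 13447 selected voxels as the voxel dimension.
dummy_fmri = torch.zeros(4, 13447)
torch.save(dummy_fmri, "subj01_train_fmri.pt")

fmri = torch.load("subj01_train_fmri.pt")
print(tuple(fmri.shape))  # (4, 13447)

# The 30 FPS -> 3 FPS downsampling keeps 1 frame in 10; e.g. a
# dummy 2-second clip of 60 RGB frames reduces to 6 frames.
frames_30fps = torch.zeros(60, 3, 64, 64)
frames_3fps = frames_30fps[::10]
print(frames_3fps.shape[0])  # 6
```

Check the actual tensor shapes after loading; they depend on the subject and split.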
References:
[1] Wen, Haiguang, et al. "Neural encoding and decoding with deep learning for dynamic natural vision." Cerebral Cortex 28.12 (2018): 4136-4160.
[2] Gong, Zixuan, et al. "NeuroClips: Towards High-fidelity and Smooth fMRI-to-Video Reconstruction." arXiv preprint arXiv:2410.19452 (2024).