id
stringlengths 10
10
| title
stringlengths 5
246
| abstract
stringlengths 42
3.32k
| authors
stringlengths 5
21.5k
| published_date
timestamp[s] | link
stringlengths 33
34
| markdown
stringlengths 140
1.08M
| abstract_ja
stringlengths 0
1.35k
|
---|---|---|---|---|---|---|---|
2305.00379 | Image Completion via Dual-path Cooperative Filtering | Given the recent advances with image-generating algorithms, deep image
completion methods have made significant progress. However, state-of-art
methods typically provide poor cross-scene generalization, and generated masked
areas often contain blurry artifacts. Predictive filtering is a method for
restoring images, which predicts the most effective kernels based on the input
scene. Motivated by this approach, we address image completion as a filtering
problem. Deep feature-level semantic filtering is introduced to fill in missing
information, while preserving local structure and generating visually realistic
content. In particular, a Dual-path Cooperative Filtering (DCF) model is
proposed, where one path predicts dynamic kernels, and the other path extracts
multi-level features by using Fast Fourier Convolution to yield semantically
coherent reconstructions. Experiments on three challenging image completion
datasets show that our proposed DCF outperforms state-of-art methods. | Pourya Shamsolmoali, Masoumeh Zareapoor, Eric Granger | 2023-04-30T03:54:53 | http://arxiv.org/abs/2305.00379v1 | # Image Completion via Dual-Path Cooperative Filtering
###### Abstract
Given the recent advances with image-generating algorithms, deep image completion methods have made significant progress. However, state-of-art methods typically provide poor cross-scene generalization, and generated masked areas often contain blurry artifacts. Predictive filtering is a method for restoring images, which predicts the most effective kernels based on the input scene. Motivated by this approach, we address image completion as a filtering problem. Deep feature-level semantic filtering is introduced to fill in missing information, while preserving local structure and generating visually realistic content. In particular, a Dual-path Cooperative Filtering (DCF) model is proposed, where one path predicts dynamic kernels, and the other path extracts multi-level features by using Fast Fourier Convolution to yield semantically coherent reconstructions. Experiments on three challenging image completion datasets show that our proposed DCF outperforms state-of-art methods.
Pourya Shamsolmoali\({}^{1}\), Masoumeh Zareapoor\({}^{2}\), Eric Granger\({}^{3}\)\({}^{1}\)Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University, China
\({}^{2}\)School of Automation, Shanghai Jiao Tong University, China
\({}^{3}\)Lab. d'imagerie, de vision et d'intelligence artificielle, Dept. of Systems Eng., ETS, Canada Image Completion, Image Inpainting, Deep Learning.
## 1 Introduction
The objective of image completion (inpainting) is to recover images by reconstructing missing regions. Images with inpainted details must be visually and semantically consistent. Therefore, robust generation is required for inpainting methods. Generative adversarial networks (GANs) [2, 18] or auto-encoder networks [16, 20, 21] are generally used in current state-of-the-art models [10, 11, 19] to perform image completion. In these models, the input image is encoded into a latent space by generative network-based inpainting, which is then decoded to generate a new image. The quality of inpainting is entirely dependent on the data and training approach, since the procedure ignores priors (for example smoothness among nearby pixels or features). It should be noted that, unlike the generating task, image inpainting has its own unique challenges. First, image inpainting requires that the completed images be clean, high-quality, and natural. These constraints separate image completion from the synthesis tasks, which focuses only on naturalness. Second, missing regions may appear in different forms, and the backgrounds could be from various scenes. Given these constraints, it is important for the inpainting method to have a strong capacity to generalize across regions that are missing. Recent generative networks have made substantial progress in image completion, but they still have a long way to go before they can address the aforementioned problems.
For instance, RFRNet [7] uses feature reasoning on the auto-encoder architecture for the task of image inpainting. As shown in Fig. 1, RFRNet produces some artifacts in output images. JPGNet and MISF [5, 8] are proposed to address generative-based inpainting problems [7, 12, 15] by reducing artifacts using image-level predictive filtering. Indeed, image-level predictive filtering reconstructs pixels from neighbors, and filtering kernels are computed adaptively based on the inputs. JPGNet is therefore able to retrieve the local structure while eliminating artifacts. As seen in Fig. 1, JPGNet's artifacts are more efficiently smoother than RFRNet's. However, many details may be lost, and the actual structures are not reconstructed. LaMa [19] is a recent image inpainting approach that uses Fast Fourier Convolution (FFC) [3] inside their ResNet-based LaMa-Fourier model to address the lack of receptive field for producing repeated patterns in the missing areas. Previously, researchers struggled with global self-attention [22] and its computational complexity, and they were still unable to perform satisfactory recovery for repeated man-made structures as effectively as with LaMa. Nonetheless, as the missing regions get bigger and pass the object boundary, LaMa creates faded structures.
Figure 1: Examples of an image completed with our DCF model compared to baseline methods on the Paris dataset. DCF generates high-fidelity and more realistic images.
In [12], authors adopts LaMa as the base network, and can captures various types of missing information by utilizing additional types of masks. They use more damaged images in the training phase to improve robustness. However, such a training strategy is unproductive. Transformer-based approaches [20, 23] recently have attracted considerable interest, despite the fact that the structures can only be estimated within a low-resolution coarse image, and good textures cannot be produced beyond this point. Recent diffusion-based inpainting models [13, 17] have extended the limitations of generative models by using image information to sample the unmasked areas or use a score-based formulation to generate unconditional inpainted images, however, these approaches are not efficient in real-world applications.
To address this problem, we introduce a new neural network architecture that is motivated by the predictive filtering on adaptability and use large receptive field for producing repeating patterns. In particular, this paper makes two key contributions. First, semantic filtering is introduced to fill the missing image regions by expanding image-level filtering into a feature-level filtering. Second, a Dual-path Cooperative Filtering (DCF) model is introduced that integrates two semantically connected networks - a kernel prediction network, and a semantic image filtering network to enhance image details.
The semantic filtering network supplies multi-level features to the kernel prediction network, while the kernel prediction network provides dynamic kernels to the semantic filtering network. In addition, for efficient reuse of high-frequency features, FFC [3] residual blocks are utilized in the semantic filtering network to better synthesize the missing regions of an image, leading to improved performance on textures and structures. By linearly integrating neighboring pixels or features, DCF is capable of reconstructing them with a smooth prior across neighbors. Therefore, DCF utilizes both semantic and pixel-level filling for accurate inpainting. Following Fig. 1, the propose model produces high-fidelity and realistic images. Furthermore, in comparison with existing methods, our technique involves a dual-path network with a dynamic convolutional operation that modifies the convolution parameters based on different inputs, allowing to have strong generalization. A comprehensive set of experiments conducted on three challenging benchmark datasets (CelebA-HQ [6], Places2 [24], and Paris StreetView [4]), shows that our proposed method yields better qualitative and quantitative results than state-of-art methods.
## 2 Methodology
Predictive filtering is a popular method for restoring images that is often used for image denoising tasks [14]. We define image completion as pixel-wise predictive filtering:
\[I_{c}=I_{m}\vartriangle T, \tag{1}\]
in which \(I_{c}\in\mathbb{R}^{(H\times W\times 3)}\) represents a complete image, \(I_{m}\in\mathbb{R}^{(H\times W\times 3)}\) denotes the input image with missing regions from the ground truth image \(I_{gr}\in\mathbb{R}^{(H\times W\times 3)}\). The tensor \(T\in\mathbb{R}^{(H\times W\times N^{2})}\) has \(HW\) kernels for filtering each pixel and the pixel-wise filtering operation is indicated by the operation \({}^{\prime}\vartriangle^{\prime}\). Rather than using image-level filtering, we perform the double-path feature-level filtering, to provides more context information. Our idea is that, even if a large portion of the image is destroyed, semantic information can be maintained. To accomplish semantic filtering, we initially use an auto-encoder network in which the encoder extracts features of the damaged image \(I_{m}\), and the decoder maps the extracted features to the complete image \(I_{c}\). Therefore, the encoder can be defined by:
\[f_{L}=\rho(I_{m})=\rho_{L}(...\rho_{l}(...\rho_{2}(\rho_{1}(I_{m})))), \tag{2}\]
in which \(\rho(.)\) denotes the encoder while \(f_{l}\) represents the feature taken from the deeper layers (\(l^{th}\)), \(f_{l}=\rho_{l}(f_{l-1})\). For instance, \(f_{l}\) shows the last layer's result of \(\rho(.)\).
In our encoder network, to create remarkable textures and semantic structures within the missing image regions, we adopt Fast Fourier Convolutional Residual Blocks (FFC-Res) [19]. The FFC-Res shown in Fig. 2 (b) has two FFC layers. The channel-wise Fast Fourier Transform (FFT) [1] is the core of the FFC layer [3] to provide a whole image-wide receptive field. As shown in Fig. 2 (c), the FFC layer divides channels into two branches: a) a local branch, which utilizes standard convolutions to capture spatial information, and b) a global branch, which employs a Spectral Transform module to analyze global structure and capture long-range context.
Figure 2: Overview of the proposed architecture. (a) Our proposed DCF inpainting network with (b) FFC residual block to have a larger receptive field. (c) and (d) show the architecture of the FFC and Spectral Transform layers, respectively.
Outputs of the local and global branches are then combined. Two Fourier Units (FU) are used by the Spectral Transform layer (Fig. 2 (d)) in order to capture both global and semi-global features. The FU on the left represents the global context. In contrast, the Local Fourier Unit on the right side of the image takes in one-fourth of the channels and focuses on the semi-global image information. In a FU, the spatial structure is generally decomposed into image frequencies using a Real FFT2D operation, a frequency domain convolution operation, and ultimately recovering the structure via an Inverse FFT2D operation. Therefore, based on the encoder the network of our decoder is defined as:
\[I_{c}=\rho^{-1}(f_{L}), \tag{3}\]
in which \(\rho^{-1}(.)\) denotes the decoder. Then, similar to image-level filtering, we perform semantic filtering on extracted features according to:
\[\hat{f}_{l}[r]=\sum_{s\in\mathcal{N}_{\kappa}}T_{\kappa}^{l}[s-r]f_{l}[s], \tag{4}\]
in which \(r\) and \(s\) denote the image pixels' coordinates, whereas the \(\mathcal{N}_{\kappa}\) consist of \(N^{2}\) closest pixels. \(T_{\kappa}^{l}\) signifies the kernel for filtering the \(\kappa^{th}\) component of \(T_{l}\) through its neighbors \(\mathcal{N}_{\kappa}\). To incorporate every element-wise kernel, we use the matrix \(T_{l}\) as \(T_{\kappa}^{l}\). Following this, Eq. (2) is modified by substituting \(f_{l}\) with \(\hat{f}_{l}\). In addition, we use a predictive network to predict the kernels' behaviour in order to facilitate their adaptation for two different scenes.
\[T_{l}=\varphi_{l}(I_{m}), \tag{5}\]
in which \(\varphi_{l}(.)\) denotes the predictive network to generate \(T_{l}\). In Fig. 2(a) and Table 2, we illustrate our image completion network which consist of \(\rho(.),\rho^{-1},\) and \(\varphi_{l}(.)\). The proposed network is trained using the \(L_{1}\) loss, perceptual loss, adversarial loss, and style loss, similar to predictive filtering.
## 3 Experiments
In this section, the performance of our DCF model is compared to state-of-the-art methods for image completion task. Experiments are carried out on three datasets, CelebA-HQ [6], Places2 [24], and Paris StreetView [4] at \(256\times 256\) resolution images. With all datasets, we use the standard training and testing splits. In both training and testing we use the diverse irregular mask (20%-40% of images occupied by holes) given by PConv [9] and regular center mask datasets. The code is provided at _DCF_.
**Performance Measures:** The structural similarity index (SSIM), peak signal-to-noise ratio (PSNR), and Frechet inception distance (FID) are used as the evaluation metrics.
### Implementation Details
Our proposed model's framework is shown in Table 2.
**Loss functions.** We follow [15] and train the networks using four loss functions, including \(L_{1}\) loss (\(\ell_{1}\)), adversarial loss (\(\ell_{A}\)), style loss (\(\ell_{S}\)), and perceptual loss (\(\ell_{P}\)), to obtain images with excellent fidelity in terms of quality as well as semantic levels. Therefore, we can write the reconstruction loss (\(\ell_{R}\)) as:
\[\ell_{R}=\lambda_{1}\ell_{1}+\lambda_{a}\ell_{A}+\lambda_{p}\ell_{P}+\lambda_ {s}\ell_{S}. \tag{6}\]
\begin{table}
\begin{tabular}{l|c|c||c|c} \hline \multicolumn{4}{c||}{Feature extracting network} & \multicolumn{2}{c}{Predicting network} \\ \hline Layer & In. & Out/size & In. & Out/size \\ \hline \hline conv(7,3,64) & \(I_{m}\) & \(f_{1}\) / 256 & \(I_{m}\) & \(e_{1}\) / 256 \\ conv(4,64,128) & \(f_{1}\) & \(f_{2}\) / 128 & \(e_{1}\) & \(e_{2}\) / 128 \\ pooling & \(f_{2}\) & \(f_{2}\) / 64 & \(e_{2}\) & \(e_{2}\) / 64 \\ conv(4,128,256) & \(f_{2}\) & \(f_{3}\) / 64 & \([f_{2}^{\prime},e_{2}^{\prime}]\) & \(e_{3}\) / 64 \\ \(f_{3}\) \(\
in which \(\lambda_{1}=1\), \(\lambda_{a}=\lambda_{p}=0.1\), and \(\lambda_{s}=250\). More details on the loss functions can be found in [15].
**Training setting.** We use Adam as the optimizer with the learning rate of \(1e-4\) and the standard values for its hyperparameters. The network is trained for 500k iterations and the batch size is 8. The experiments are conducted on the same machine with two RTX-3090 GPUs.
### Comparisons to the Baselines
**Qualitative Results.** The proposed DCF model is compared to relevant baselines such as RFRNet [7], JPGNet [5], and LaMa [19]. Fig. 3 and Fig. 4 show the results for the Places2 and CelebA-HQ datasets respectively. In comparison to JPGNet, our model preserves substantially better recurrent textures, as shown in Fig. 3. Since JPGNet lacks attention-related modules, high-frequency features cannot be successfully utilized due to the limited receptive field. Using FFC modules, our model expanded the receptive field and successfully project source textures on newly generated structures. Furthermore, our model generates superior object boundary and structural data compared to LaMa. Large missing regions over larger pixel ranges limit LaMa from hallucinating adequate structural information. However, ours uses the advantages of the coarse-to-fine generator to generate a more precise object with better boundary. Fig. 4 shows more qualitative evidence. While testing on facial images, RFRNet and LaMa produce faded forehead hairs and these models are not robust enough. The results of our model, nevertheless, have more realistic textures and plausible structures, such as forehead form and fine-grained hair.
**Quantitative Results.** On three datasets, we compare our proposed model with other inpainting models. The results shown in Table 2 lead to the following conclusions: 1) Compared to other approaches, our method outperforms them in terms of PSNR, SSIM, and FID scores for the most of datasets and mask types. Specifically, we achieve 9% higher PNSR on the Places2 dataset's irregular masks than RFRNet. It indicates that our model has advantages over existing methods. 2) We observe similar results while analyzing the FID. On the CelebA-HQ dataset, our method achieves 2.5% relative lower FID than LaMa under the center mask. This result indicates our method's remarkable success in perceptual restoration. 3) The consistent advantages over several datasets and mask types illustrate that our model is highly generalizable.
## 4 Conclusion
Dual-path cooperative filtering (DCF) was proposed in this paper for high-fidelity image inpainting. For predictive filtering at the image and deep feature levels, a predictive network is proposed. In particular, image-level filtering is used for details recovery, whereas deep feature-level filtering is used for semantic information completion. Moreover, in the image-level filtering the FFC residual blocks is adopted to recover semantic information and resulting in high-fidelity outputs. The experimental results demonstrate our model outperforms the state-of-art inpainting approaches.
#### Acknowledgments
This research was supported in part by NSFC China. The corresponding author is Masoumeh Zareapoor.
\begin{table}
\begin{tabular}{l|l|c c|c c|c c} \hline \hline \multirow{3}{*}{} & \multirow{3}{*}{Method} & \multicolumn{3}{c|}{CelebA-HQ} & \multicolumn{3}{c|}{Places2} & \multicolumn{3}{c}{Paris StreetView} \\ \cline{3-8} & & Irregular & Center & Irregular & Center & Irregular & Center \\ \hline \multirow{8}{*}{PSNR\(\uparrow\)} & RFRNet [7] & 26.63 & 21.32 & 22.58 & 18.27 & 23.81 & 19.26 \\ & JPGNet [5] & 25.54 & 22.71 & 23.93 & 19.22 & 24.79 & 20.63 \\ & TFill [23] & 26.84 & 23.65 & 24.32 & 20.49 & 25.46 & 21.85 \\ & LaMa [19] & 27.31 & 24.18 & **25.27** & 21.67 & 25.84 & 22.59 \\ & GLaMa [12] & 28.17 & 25.13 & 25.08 & 21.83 & 26.23 & 22.87 \\ & DCF (ours) & **28.34** & **25.62** & 25.19 & **22.30** & **26.57** & **23.41** \\ \hline \multirow{8}{*}{SSIM\(\uparrow\)} & RFRNet [7] & 0.934 & 0.912 & 0.819 & 0.801 & 0.862 & 0.849 \\ & JPGNet [5] & 0.927 & 0.904 & 0.825 & 0.812 & 0.873 & 0.857 \\ & TFill [23] & 0.933 & 0.907 & 0.826 & 0.814 & 0.870 & 0.857 \\ & LaMa [19] & 0.939 & 0.911 & 0.829 & 0.816 & 0.871 & 0.856 \\ & GLaMa [12] & 0.941 & 0.925 & **0.833** & 0.817 & 0.872 & 0.858 \\ & DCF (ours) & **0.943** & **0.928** & 0.832 & **0.819** & **0.876** & **0.861** \\ \hline \multirow{8}{*}{FID\(\downarrow\)} & RFRNet [7] & 17.07 & 17.83 & 15.56 & 16.47 & 40.23 & 41.08 \\ & JPGNet [5] & 13.92 & 15.71 & 15.14 & 16.23 & 37.61 & 39.24 \\ & TFill [23] & 13.18 & 13.87 & 15.48 & 16.24 & 33.29 & 34.41 \\ & LaMa [19] & 11.28 & 12.95 & 14.73 & 15.46 & 32.30 & 33.26 \\ & GLaMa [12] & 11.21 & 12.91 & 14.70 & 15.35 & 32.12 & 33.07 \\ \cline{2-8} & DCF w.o. Sem-Fil & 14.34 & 15.24 & 17.56 & 18.11 & 42.57 & 44.38 \\ & DCF w.o. FFC & 13.52 & 14.26 & 15.83 & 16.98 & 40.54 & 41.62 \\ & DCF (ours) & **11.13** & **12.63** & **14.52** & **15.09** & **31.96** & **32.85** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study and quantitative comparison of our proposed and state-of-art methods on center and free form masked images from the CelebA-HQ, Places2, and Paris StreetView datasets. | 画像生成アルゴリズムの最近の進歩により、深層画像補完方法は大きく前進しています。しかし、最先端の方法は、通常、シーン間の汎化能力が低いことが知られており、生成されたマスク領域にはぼやけが生じることがあります。予測フィルタリングは、入力シーンに基づいて最も効果的なカーネルを予測する画像の修復方法です。このアプローチを動機づけ、私たちは画像補完をフィルタリング問題として捉えます。深層特徴レベルのセマンティックフィルタリングを導入して欠損情報を補完し、ローカル構造を維持して視覚的に現実的な内容を生成します。特に、動的なカーネルを予測する双方向協力フィルタリング(DCF)モデルが提案されています。一方のパスは動的なカーネルを予測し、もう一方のパスは、高速フーリエ変換を使用して多段階の特徴を抽出することで、セマンティックに整合性のある再構築を実現します |
2307.16362 | High Sensitivity Beamformed Observations of the Crab Pulsar's Radio
Emission | We analyzed four epochs of beamformed EVN data of the Crab Pulsar at 1658.49
MHz. With the high sensitivity resulting from resolving out the Crab Nebula, we
are able to detect even the faint high-frequency components in the folded
profile. We also detect a total of 65951 giant pulses, which we use to
investigate the rates, fluence, phase, and arrival time distributions. We find
that for the main pulse component, our giant pulses represent about 80% of the
total flux. This suggests we have a nearly complete giant pulse energy
distribution, although it is not obvious how the observed distribution could be
extended to cover the remaining 20% of the flux without invoking large numbers
of faint bursts for every rotation. Looking at the difference in arrival time
between subsequent bursts in single rotations, we confirm that the likelihood
of finding giant pulses close to each other is increased beyond that expected
for randomly occurring bursts - some giant pulses consist of causally related
microbursts, with typical separations of $\sim\!30{\rm\;\mu s}$ - but also find
evidence that at separations $\gtrsim\!100{\rm\;\mu s}$ the likelihood of
finding another giant pulse is suppressed. In addition, our high sensitivity
enabled us to detect weak echo features in the brightest pulses (at
$\sim\!0.4\%$ of the peak giant pulse flux), which are delayed by up to
$\sim\!300{\rm\;\mu s}$. | Rebecca Lin, Marten H. van Kerkwijk | 2023-07-31T01:36:55 | http://arxiv.org/abs/2307.16362v2 | # High Sensitivity Beamformed Observations of the Crab Pulsar's Radio Emission
###### Abstract
We analyzed four epochs of beamformed EVN data of the Crab Pulsar at \(1658.49\rm\,MHz\). With the high sensitivity resulting from resolving out the Crab Nebula, we are able to detect even the faint high-frequency components in the folded profile. We also detect a total of \(65951\) giant pulses, which we use to investigate the rates, fluence, phase, and arrival time distributions. We find that for the main pulse component, our giant pulses represent about 80% of the total flux. This suggests we have a nearly complete giant pulse energy distribution, although it is not obvious how the observed distribution could be extended to cover the remaining 20% of the flux without invoking large numbers of faint bursts for every rotation. Looking at the difference in arrival time between subsequent bursts in single rotations, we confirm that the likelihood of finding giant pulses close to each other is increased beyond that expected for randomly occurring bursts - some giant pulses consist of causally related microbursts, with typical separations of \(\sim 30\rm\ \mu s\) - but also find evidence that at separations \(\gtrsim\!100\rm\ \mu s\) the likelihood of finding another giant pulse is suppressed. In addition, our high sensitivity enabled us to detect weak echo features in the brightest pulses (at \(\sim\!0.4\%\) of the peak giant pulse flux), which are delayed by up to \(\sim\!300\rm\ \mu s\).
Pulsars (1306) -- Radio bursts (1339) -- Very long baseline interferometry (1769) 0000-0002-4818-2886]Rebecca Lin
0000-0002-4882-0886]Marten H. van Kerkwijk
0000-0002-4882-0886]D.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A. Leiden-Wagner, A.A Leiden-Wagner, A. Leiden-Wagner, A.A. Leiden-Wagner, A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A. Leiden-Wagner, A.A Leiden-Wagner, A. Leiden-Wagner, A. Leiden-Wagner, A.A Leiden-Wagner, A.A Leiden-Wagner, A. Leiden-Wagner, A.A Leiden-Wagner, A. Leiden-Wagner, A.
Investigation of the emission from the Crab Pulsar is complicated by propagation effects along the line of sight, especially at lower frequencies, \(\lesssim 2\ \mathrm{GHz}\). While dispersion can be removed using coherent de-dispersion (either during recording, or afterwards with baseband data), scattering effects are difficult to remove. This includes echoes due to propagation in the Crab Nebula itself, which sometimes are bright and obvious (Backer et al., 2000; Lyne et al., 2001), but can also be quite faint (Driessen et al., 2019), making it difficult to disentangle them from microbursts without having a good pulse sample to look for repeating structure.
Another complication in studying the emission of the Crab Pulsar is the radio-bright nebula in which the pulsar resides. This contributes noise and hence many previous studies relied on long integrations to observe both the weaker pulse components and echoes in the average profile. But the contribution to the noise can be reduced by resolving the nebula, using large dishes or arrays, such as the VLA, Arecibo, and Westerbork (Moffett & Hankins, 1996; Cordes et al., 2004; Karuppusamy et al., 2010; Lewandowska et al., 2022).
In this paper, we use the European VLBI Network (EVN) to resolve out the Crab Nebula and obtain high sensitivity data. In Section 2, we describe our observations and data reduction, and in Section 3, we present the resulting pulse profiles and the components that are detectable at our high sensitivity. We turn to an analysis of GPs in Section 4, investigating their rates, fluence, phase, and arrival time distributions, as well as weak echoes seen in the brightest GPs. We summarize our findings in Section 5.
## 2 Observations and Data Reduction
We analyze observations of the Crab Pulsar taken by the EVN, projects EK036 A-D, at four epochs between 2015 Oct and 2017 May (see Table 1). Throughout these observations, calibrator sources were also observed resulting in breaks in our data. While many dishes participated in these observations, for our analysis we only use telescope data that had relatively clean signals across the frequency range of \(1594.49-1722.49\ \mathrm{MHz}\) in both circular polarizations. At each single dish, real-sampled data were recorded in either 2 bit MARK 5B or VDIF format1, covering the frequency range in either eight contiguous \(16\ \mathrm{MHz}\) wide bands or four contiguous \(32\ \mathrm{MHz}\) wide bands.
Footnote 1: For specifications of MARK5B and VDIF, see [https://www.haystack.mit.edu/haystack-memo-series/mark-5-memos/](https://www.haystack.mit.edu/haystack-memo-series/mark-5-memos/) and [https://vlbi.org/wp-content/uploads/2019/03/VDIF_specification_Release_1.1.1.pdf](https://vlbi.org/wp-content/uploads/2019/03/VDIF_specification_Release_1.1.1.pdf), respectively.
For these datasets, single dish data were processed and then combined coherently to form a tied-array beam as described in Lin et al. (2023). The resulting RFI-removed, normalized, de-dispersed (using dispersion measures (DMs) listed in Table 1), parallactic angle corrected, and phased baseband data were squared to form intensity data. As in Lin et al. (2023), we estimate the system equivalent flux density (SEFD) for the phased EVN array as \((S_{\text{CN}}+\langle S_{\text{tel}}\rangle)/N_{\text{tel}}\approx 140-160\ \mathrm{ Jy}\), where \(S_{\text{CN}}\approx 833\ \mathrm{Jy}\) is the SEFD of the Crab Nebula at our observing frequency (Bietenholz et al., 1997), \(\langle S_{\text{tel}}\rangle\simeq 300\ \mathrm{Jy}\) is the average nominal SEFD of the telescopes2 and \(N_{\text{tel}}=7\ \mathrm{or}\ 8\) is the number of telescopes used. By combining the single dishes into a synthesized beam, we resolve out the radio-bright Crab Nebula and increase our sensitivity, thus allowing us to investigate the weaker radio emission of the Crab Pulsar.
Footnote 2: [http://old.evlbi.org/cgi-bin/EVNcalc](http://old.evlbi.org/cgi-bin/EVNcalc).
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline Observation & & \(t_{\text{sep}}\)a & & & DMc & & & Giant Pulsesd & & \\ & Date & (h) & Telescopes usedb & & & & Giant Pulsesd & \\ & Date & (h) & Telescopes usedb & & & & & & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Observation and Giant Pulse Log.
## 3 Pulse Profiles
For each of the phased EVN datasets, we create folded pulse profiles using polyco files generated with tempo2(Hobbs and Edwards, 2012) from the monthly Jodrell Bank Crab Pulsar ephemerides3(Lyne et al., 1993) and DM from Table 1. We averaged over all frequencies and used \(512\) phase bins, rotating in phase such that the MP is at phase \(0\). We show the resulting profiles in Figure 1, with each profile scaled to its maximum to ease comparison. With our high sensitivity, we can see all five pulse components expected from the multifrequency overview of Hankins et al. (2015), corresponding to the LFC, MP, IP, HFC1 and HFC2 (with the latter two detected at \(\sim\!1.66\ \mathrm{GHz}\) for the first time).
Footnote 3: [http://www.jb.man.ac.uk/~pulsar/crab.html](http://www.jb.man.ac.uk/~pulsar/crab.html).
We fit the pulse components in the EKO36 datasets with five Gaussians to look for possible changes, both between our epochs and relative to the compilation from Hankins et al. (2015). Our fitted parameters are presented in Table 2, together with the values inferred from Hankins et al. (2015). One sees that the results for our four observations are all consistent. At \(1.4\ \mathrm{GHz}\), Lyne et al. (2013) found that the separations between the MP and IP and between the MP and LFC increase at a rate of \(0\fdg 5\pm 0\fdg 2\) per century and \(11\arcdeg\pm 2\arcdeg\) per century, respectively. Using these rates, we expect pulse phase changes for the IP and LFC of \(\sim\!0\fdg 008\) and \(\sim\!0\fdg 17\), respectively, which are not detectable within our uncertainties.
Comparing with Hankins et al. (2015), we find good agreement in pulse phase for all components (though now we do need to take into account the drift in pulse phase). We noticed, however, that while the widths of our LFC, HFC1 and HFC2 are consistent with those given by Hankins et al. (2015), the widths of the MP and IP seem smaller, even if they are still within the nominal, rather large uncertainties of Hankins et al. (2015). Looking in more detail at their Figure 3 with measurements, one sees considerable scatter for the MP and IP, even though those strong, narrow peaks should be the easiest to measure. This might suggest that some profiles were slightly smeared (e.g., because the data were not dedispersed to exactly the right DM, which is known to vary for the Crab Pulsar, or because of changes in scattering timescale at lower frequencies, see McKee et al., 2018). For a comparison with recent data, we estimated widths from the \(2-4\) and \(4-6\ \mathrm{GHz}\) pulse profiles in Figure 1 of Lewandowska et al. (2022), which were taken using the VLA in D configuration to resolve out the Crab Nebula and thus have high signal-to-noise ratio; we find these are all consistent with ours.
Figure 1: Folded pulse profile of the Crab Pulsar at \(1658.49\ \mathrm{MHz}\) from EK036 observations in \(512\) phase bins centered on the MP. At this frequency, 5 components: LFC, MP, IP, HFC1 and HFC2 are visible. In the left panel, the profiles are normalized to their peak MP component. As the HFC1 and HFC2 components (indicated by arrows) are very faint, we show the grey region of the left panel zoomed in by a factor of \(15\) in the right panel, with vertical lines marking the peak of these components.
At lower frequencies, the pulse profiles often show echo features (e.g., Driessen et al., 2019). At our frequencies, those are expected to be too weak at delays where they might be seen in the folded pulse profile, and indeed we see none. However, at frequencies like ours, echoes can still be seen in individual pulses. For instance, at \(1.4\;\mathrm{GHz}\), Crossley et al. (2004) saw that individual bright pulses all had an echo delayed at \(\sim\!50\;\mathrm{\mu s}\) (which had no counterpart at \(4.9\;\mathrm{GHz}\)). From aligning GPs before stacking them in our datasets, Lin et al. (2023) also saw hints of echo features within \(\sim\!25\;\mathrm{\mu s}\) of the peaks of GPs in EK036 B and D. In Section 4.6, we confirm echoes in our data using a more careful analysis, finding that for EK036 D faint echoes are visible out to to \(\sim\!300\;\mathrm{\mu s}\).
## 4 Giant Pulses
### Search
In Lin et al. (2023), we searched for GPs by flagging peaks above \(8\sigma\) in a \(16\;\mathrm{\mu s}\) wide running average of the intensity time stream. While we reliably found GPs, the long time window meant we could not distinguish between bursts arriving in quick succession within that time window. Hence, the previous technique was unsuitable for one of our goals, of measuring arrival time differences between bursts, including between the microbursts that GPs sometimes are composed of. Below, we describe a revised technique, which allows us to more reliably identify multiple bursts (see Figure 2). Unsurprisingly, with our new technique we detected more multiple bursts than we had previously, as can be seen by comparing numbers listed in Section 6.3 of Lin et al. 2023) with those in Table 3.
For every pulsar period in the EK036 dataset, we take \(2.0\;\mathrm{ms}\) snippets of baseband data centered at the MP and
\begin{table}
\begin{tabular}{l l l l l} \hline \hline \multicolumn{1}{c}{
\begin{tabular}{c} Pulse \\ Comp. \\ \end{tabular} } & Obs./ & Amplitude & Pulse Phase & FWHM \\ & Ref. & (\%) & (deg.) & (deg.) \\ \hline LFC\(\dots\) & A & 3.6(3) & \(-38.0(3)\) & 7.5(6) \\ & B & 3.35(17) & \(-37.67(19)\) & 7.7(4) \\ & C & 3.7(2) & \(-37.2(3)\) & 7.7(6) \\ & D & 3.9(2) & \(-37.8(2)\) & 8.1(5) \\ & H15 & \(\dots\) & \(-35.78(14)\) & 7.2(12) \\ MP \(\dots\) & A & & & 2.786(11) \\ & B & & & 2.708(7) \\ & C & & & 2.756(11) \\ & D & & & 2.836(9) \\ & H15 & & & 3.9(11) \\ IP\(\dots\) & A & 15.2(4) & 145.38(4) & 3.48(10) \\ & B & 15.2(2) & 145.28(3) & 3.59(7) \\ & C & 15.3(4) & 145.25(4) & 3.46(10) \\ & D & 14.4(3) & 145.28(4) & 3.59(8) \\ & H15 & \(\dots\) & 145.25(4) & 5.4(11) \\ HFC1\(\dots\) & A & 0.58(13) & 203(3) & 28(7) \\ & B & 0.88(9) & 198.4(13) & 25(3) \\ & C & 0.68(12) & 194(3) & 34(7) \\ & D & 0.94(11) & 196.2(15) & 36(5) \\ & H15 & \(\dots\) & 198.2(8) & 25(5) \\ HFC2\(\dots\) & A & 1.5(2) & 259.7(8) & 11.8(19) \\ & B & 1.19(14) & 259.2(7) & 11.7(16) \\ & C & 1.23(19) & 257.7(9) & 12(2) \\ & D & 1.51(15) & 259.8(7) & 14.8(16) \\ & H15 & \(\dots\) & 259.1(4) & 11.6(12) \\ \hline \end{tabular} Note. –Amplitudes and phases are relative to the MP. H15 refers to Hankins et al. (2015), and corresponding values are from evaluating the fits presented in his Tables 2 and 3 at our central observing frequency of \(1658.49\;\mathrm{MHz}\). The phases for the LFC and IP have been extrapolated to MJD 57607 (midway between EK036 A and D) using \(d\phi/dt\) values from Lyne et al. (2013). Numbers in parentheses are \(1\sigma\) uncertainties in the last digit.
\end{table}
Table 2: Properties of the Pulse Profile Components.
Figure 2: Sample MP pulse rotations with GPs as detected by our algorithm (see Section 4.1 for details), shown at a time resolution of \(1.25\;\mathrm{\mu s}\). _Top_: Single pulse with scattering tail. _Middle_: Two pulses, each with their own scattering tail. _Bottom_: A profile showing the difficulties inherent in classifying pulses: our algorithm found three pulses, but if another algorithm were to classify this as two or four pulses, that would also seem reasonable.
IP component phase windows (roughly \(2\) times the size of the pulse component determined from the folded pulse profile) and create pulse intensity stacks for each component4. We average these stack across the eight frequency bands and bin over 10 time samples, or \(0.625~{}\mu\)s, a value chosen to be large enough for a reliable GP detection yet well less than the scattering timescale of \(\sim\)\(5~{}\mu\)s during these observations (Lin et al., 2023). To detect GPs, we first subtract the off-pulse region (determined from the \(0.5~{}\mathrm{ms}\) region on either side of each pulse stack), then filter with a uniform filter of size \(5\) (\(3.125~{}\mu\)s), and finally record all samples above a detection threshold of \(5\sigma\).
Footnote 4: We only search for GPs inside these windows since Lin et al. (2023) found none outside for the same dataset.
To turn these sets of above-the-noise locations into detections of individual GPs, we use the following three-step process5. First, we connect detections within \(8\) samples (\(5~{}\mu\)s, i.e., of order the scattering time), since those are likely related. Second, we remove detections spanning \(4\) samples (\(2.5~{}\mu\)s) or less, since these are likely spurious. Third, we increase the width of a detection by \(4\) samples (\(2.5~{}\mu\)s) on either side, mostly to ensure that if we integrate over the mask, we will capture most of the flux independent of pulse strength. With this procedure, the minimum final pulse width is \(8.125~{}\mu\)s, slightly larger than the scattering timescale, and we confidently detect pulses above a threshold of \(\sim\)\(0.15~{}\mathrm{kJy}~{}\mu\)s. The brightest GP we detect has a fluence of \(\sim 560~{}\mathrm{kJy}~{}\mu\)s. With our relatively high initial detection threshold, we do not find any GPs outside our pulse windows, suggesting that we have no false detections in our sample. Nevertheless, as can be seen from the overall pulse statistics in Table 1, we find many GPs, about \(2-3\) per second or about one for every dozen pulsar rotations.
Footnote 5: Using the binary_closing, binary_opening and binary_dilation functions, respectively, from scipy’s multidimensional image processing functions (Virtanen et al., 2020).
In some pulse rotations, we detect more than one distinct GP, where "distinct" means that the pulse is separated by at least \(5~{}\mu\)s (roughly the scattering timescale) from another pulse at our detection threshold. Here, we note that whether or not a GP is detected as single or multiple depends on the detection threshold: a GP classified as a single one at our threshold might be classified as separated at a higher threshold if it has two bright peaks with some flux in between (e.g., because the scattering tail of the first peak overlaps with the start of the next one, or a weaker burst fills in the space in between). This dependence on detection threshold may explain why Bhat et al. (2008) found no pulses wider than \(10~{}\mu\)s, as they took a high detection cutoff, of \(3~{}\mathrm{kJy}~{}\mu\)s. This kind of arbitrariness seems unavoidable given the variety in pulse shapes that we see; it often is a rather subjective decision on what to take as a single bursts. To give a sense, we show in Figure 2 an example of a pulse rotation with a single burst as well as two examples of rotations with multiple bursts. In Section 4.5, we estimate the fraction of multiple bursts that is causally related from the statistics of pulse separations.
### Rates
With the high sensitivity of the phased EVN array, we detected a total of \(65951\) GPs over \(7.32~{}\mathrm{hr}\), implying an average detection rate of \(2.5~{}\mathrm{s}^{-1}\). From Table 1, one sees that the rates are not the same for each epoch. Comparable detection rates are seen for both MP and IP GPs in EK036 A and C, but those are about a factor \(2\) smaller than the rates for EK036 B and D (which are comparable to each other).
Similar changes in detection rate were found for bright pulses by Lundgren et al. (1995) at \(800~{}\mathrm{MHz}\), Bera & Chengalur (2019) at \(1330~{}\mathrm{GHz}\) and by Kazantsev et al. (2019) at \(111~{}\mathrm{MHz}\). Lundgren et al. (1995) suggests that almost
Figure 3: GP pulse detection rates in each EK036 observation. Times when the telescope was not observing the Crab Pulsar are shaded grey. The MP (blue) and IP (orange) detection rates appear to scale together and are relatively constant across each observation.
certainly, these are due to changes in the scattering screen, which are known to cause changes in the scattering time on similar timescales and are expected to cause changes in magnification as well. To verify that there are no variations at shorter timescales, we calculated rates at roughly \(5\,\mathrm{min}\) intervals. As can be seen in Figure 3, we find that in a given epoch, the rates are indeed steady.
### Fluences
The fluence distribution of the Crab Pulsar's GPs is typically described by power-law approximations to the reverse cumulative distribution,
\[N_{\mathrm{GP}}(E>E_{0})=CE_{0}^{\alpha}, \tag{1}\]
where \(\alpha\) is the power-law index, \(C\) a proportionality constant, and \(E_{0}\) the GP fluence such that \(N_{\mathrm{GP}}(E>E_{0})\) is the occurrence rate of GPs above \(E_{0}\). For our data, one sees in Figure 4, that for all observations the distributions indeed appear power-law like at high fluence, with \(\alpha\approx-2.0\) and \(-1.6\) for MP and IP, respectively. These values are roughly consistent with values found at similar frequencies: e.g., Popov & Stappers (2007) find \(-1.7\) to \(-3.2\) for MP GPs and \(-1.6\) for IP GPs at \(1197\,\mathrm{MHz}\), and Majid et al. (2011) finds \(\alpha=-1.9\) for the combined MP and IP distribution at \(1664\,\mathrm{MHz}\).
However, as noted by Hankins et al. (2015) already, the power-law indices show large scatter and should be taken as roughly indicative only, showing, e.g., that at higher frequencies, very bright pulses are relatively rare. Indeed, in our data, like in more sensitive previous studies (e.g., Lundgren et al., 1995; Popov & Stappers, 2007; Bhat et al., 2008; Karuppusamy et al., 2010), the fluence distribution clearly flattens at lower fluences. At the very low end, this is because our detection method misses more pulses, but the changes above \(\sim 0.2\,\mathrm{kJy}\,\mathrm{\mu s}\) are real. This turnover may at least partially explain why a variety of power-law indices was found previously, as the measured index will depend on what part of the fluence distribution is fit (which will depend also on the magnification by scattering), as well as why for very high fluences, well away from the turn-over, the power-law index seems fairly stable (Bera & Chengalur, 2019).
Comparing the distributions for the different epochs, one sees that they are very similar except for a shift left or right in the figure. This confirms that the differences in rates seen between the epochs are due differences in magnification due to scintillation (and not due to the Crab Pulsar varying the rate at which pulses are emitted, which would, to first order, shift the distributions up and down).
As the fluence distributions looked roughly parabolic in log-log space, we also show cumulative log-normal distributions in Figure 4, of the form,
\[N_{\mathrm{GP}}(E>E_{0})=\frac{A}{2}\left[\mathrm{erfc}\left(\frac{\ln E_{0}- \mu}{\sigma\sqrt{2}}\right)\right], \tag{2}\]
where \(A\) is a scale factor, \(\mu\) and \(\sigma\) are the mean and standard deviation of \(\ln E_{0}\), and \(\mathrm{erfc}\) is the complementary error function. One sees that these describe the observed cumulative distributions quite well.
Figure 4: Reverse cumulative GP fluence distribution showing the occurrence rates of GPs. For comparison, power-law distributions (solid black lines) and log-normal distributions (dashed black line) are shown, with indices \(\alpha\) and widths \(\sigma\) as listed in the legend.
If the intrinsic distributions were log-normal, it would imply that especially for the MP, most of the flux is already captured and that the total rate of GPs is not much larger than our detection rate. For the log-normal distribution shown in Figure 4, for the MP, \(A=2.7\ \mathrm{s}^{-1}\) and the mean GP fluence is \(\langle E\rangle=\exp(\mu+\frac{1}{2}\sigma^{2})=1.2\ \mathrm{kJy\,\mu s}\) and only 1.5% of the total flux is below \(0.15\ \mathrm{kJy\,\mu s}\), while for the IP, \(A=1.6\ \mathrm{s}^{-1}\) and \(\langle E\rangle=0.24\ \mathrm{kJy\,\mu s}\), and 13% of the flux is below.
We can verify whether our MP GPs account for most of the flux by calculating pulse profiles with and without removing pulse rotations where GPs are detected. As can be seen in Figure 5, significant flux remains in both MP and IP. For the MP, even though the remaining signal is brighter in epochs B and D, the fraction is lower: about 18% in B and D, in comparison with 23% in A and C. This again can be understood if the larger detection rate is due to an overall magnification: a larger fraction of the pulses - and hence of the total flux - is detected.
Our result is similar (but more constraining) than that of Majid et al. (2011), who showed that at least \(54\%\) of overall pulsed energy flux for the Crab Pulsar is emitted in the form of GPs. But it is in contrast for what is seen by Abbate et al. (2020) for PSR J1823\(-\)3021A, where the detected GPs make up only a small fraction of the integrated pulse emission (\(4\%\) and \(2\%\) for their C1 and C2 components, respectively), and by Geyer et al. (2021) for PSR J0540\(-\)6919, where the detected GPs only make up \(7\%\) of the total flux. This might indicate a difference in the emission process. As these authors noted, however, a larger population of undetected GPs may still be hidden below their detection threshold.
For our observations, for both MP and IP, the residual flux is much larger than expected based on the log-normal distribution, thus indicating that the true fluence distribution has more pulses at low fluence (many more for the IP); if additional pulses were emitted also in rotations that we do not detect them, their typical fluence would be the residual flux integrated over one cycle, which is \(\sim 25\ \mathrm{Jy\,\mu s}\) for MP and a little less for IP. This is well below our detection limit, so consistent in that sense, but from the distributions shown in Figure 4, one would expect a much smaller rate than once per pulse period at \(25\ \mathrm{Jy\,\mu s}\). This might suggest that there are even more but typically fainter bursts (note that it cannot be fainter bursts accompanying the GPs we already detect, since we excluded the full rotations in calculating the resid
Figure 5: Mean and median MP and IP pulse profiles obtained using all pulse rotations (in blue and orange, respectively) and using only those in which no GPs were detected (green and red, respectively) in \(6.25\ \mathrm{\mu s}\) bins. Note that because the noise in an individual profile is not normally distributed, but rather follows a \(\chi_{k}^{2}\) distribution, the median is slightly below zero in the off-pulse region, by \((1-2/3k)^{3}-1\simeq-6/9k\simeq-0.0002\) of the SEFD of \(\sim\!150\ \mathrm{Jy}\) (Section 2), or \(\sim\!-0.03\ \mathrm{Jy}\) given \(k=3200\) degrees of freedom (complex dedispersed timestream squared, averaged over 2 polarizations, 8 bands, and 100 time bins).
ual emission), or that there is some steady underlying emission. It would be worthwhile to test this with more sensitive future observations.
### Pulse Phases
Defining the time of arrival of a GP as the time when an increase in flux is first detected, the longitude windows where MP and IP GPs occur have total widths of \(\sim 680\)\(\mu\)s and \(860\)\(\mu\)s (or \(\sim\!7\fdg 3\) and \(\sim\!9\fdg 2\)), respectively (averaged over the four epoch). As can be seen in Figure 6, the majority of GPs occur within much narrower windows: the root-mean-square deviations around the mean arrival phases are \(\sim\!100\)\(\mu\)s and \(\sim\!130\)\(\mu\)s (or \(\sim\!1\fdg 1\) and \(\sim\!1\fdg 4\)), respectively. The number distribution is roughly Gaussian, with a slightly negative skewness (i.e., a longer tail toward earlier phases and thus with a mode towards later phases). This was also observed by Majid et al. (2011) at a similar frequency of \(1664\)\(\mathrm{MHz}\). In EKO36 D, a few MP pulses are detected beyond the range found in the other epochs. As we will discuss in Section 4.6, these "outlier" detections are due to echoes (hence, they are are omitted in our determinations of widths above).
In Figure 6, we also show the flux distributions as a function of pulse phase, including the median flux of the GPs detected in any given phase bin. One sees no obvious variation, i.e., no hint of, e.g., brighter pulses having an intrinsically narrower phase distribution. This suggests that only the probability of seeing a pulse depends on pulse phase. In our earlier work on these data, where we studied how the pulse spectra and their correlations are affected by scattering (Lin et al., 2023), we concluded that we resolved the regions from which the nanoshots that comprise individual GPs are emitted, and that this is most easily understood if the emitting plasma is ejected highly relativistically, with \(\gamma\simeq 10^{4}\) (as was already suggested by Bij et al., 2021). If so, the emission would be beamed to angles much smaller than the width of the phase windows, and the range of phases over which we observe GPs would reflect the range of angles over which plasma is ejected.
### Arrival Times
Several studies (e.g., Karuppusamy et al., 2010; Majid et al., 2011) have found that GPs in different rotations are not correlated, and that there is no correlation between MP and IP GPs, but that instead the distribution of the time delays between successive GPs follows an exponential distribution, as expected for a Poissonian process. Within a given cycle, though, multiple correlated microbursts can occur (Sallmen et al., 1999; Hankins and Eilek, 2007).
With our high sensitivity, we can investigate this in more detail. In Table 3 we show the number of rotations in which we detect multiple MP or IP bursts (i.e., double, triple etc.), as well as the number expected (listed only where larger than 0) for the case where all events are independent,
\[N_{n}=p_{n}N_{r}=\begin{pmatrix}N_{\mathrm{p}}\\ n\end{pmatrix}\left(\frac{1}{N_{r}}\right)^{n}\left(1-\frac{1}{N_{r}}\right)^{ N_{\mathrm{p}}-n}N_{r}, \tag{3}\]
where \(p_{n}\) is the probability of a given rotation to have \(n\) bursts (assuming a binomial distribution), \(N_{r}\) is the total number of rotations observed, and \(N_{\mathrm{p}}\) is the total number of bursts found (and where for numerical values we inserted numbers from Table 1: \(N_{\mathrm{p}}=N_{\mathrm{MP}}\) or \(N_{\mathrm{IP}}\) and \(N_{r}=t_{\mathrm{exp}}/P_{\mathrm{Crab}}\), where \(P_{\mathrm{Crab}}=33.7\)\(\mathrm{ms}\) is the rotation period of the pulsar). One sees that we detect significantly more multiples than expected by chance6, i.e., some of the detected pulses are composed of multiple, causally related microbursts.
Footnote 6: In Lin et al. (2023), we wrongly concluded the multiples were consistent with arising by chance. Sadly, we used incorrect estimates of \(N_{n}\).
In principle, one could estimate the number of independent bursts, \(N_{\mathrm{p}}^{\mathrm{ind}}\), in each epoch by subtracting from \(N_{\mathrm{p}}\) the excess pulses from Table 3, but this would not be quite correct since the excess would be relative to estimates made using the total number of observed pulses \(N_{\mathrm{p}}\), not the (lower) number of independent pulses \(N_{\mathrm{p}}^{\mathrm{ind}}\). One could iterate, but an easier, unbiased estimate of \(N_{\mathrm{p}}^{\mathrm{ind}}\) can be made using the observed fraction of rotations in which we do not see any bursts, which should equal \(N_{0}/N_{r}=p_{0}=\left(1-1/N_{r}\right)^{N_{\mathrm{p}}^{\mathrm{ind}}}\). Solving for \(N_{\mathrm{p}}^{\mathrm{ind}}\), we find that \(N_{\mathrm{p}}^{\mathrm{ind}}=fN_{\mathrm{p}}\) with fractions \(f\) that are consistent between all epochs, at \(91.8\pm 0.2\) and \(95.2\pm 0.5\)% for MP and IP, respectively. Hence, about 8 and 5% of the detected MP and IP pulses, respectively, are extra components. Or, as fractions of independent MP and IP pulses, \((6,1,0.12)\) and \((4,0.3,0.0)\%\), respectively, are causally related double, triple, or quadruple microbursts.
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Observation & \multicolumn{3}{c}{MP} & \multicolumn{3}{c}{\(\dots\)} & IP & \multicolumn{3}{c}{\(\dots\)} \\ Code & 2 & 3 & 4 & 5 & 6 & 2 & 3 & 4 \\ \hline \hline EK036 A & 1820(599) & 200(12) & 24 & 0 & 0 & 144(17) & 4 & 2 \\ EK036 B & 1431(611) & 170(18) & 22 & 3 & 1 & 237(43) & 16 & 2 \\ EK036 C & 611(213) & 67 (4) & 6 & 0 & 0 & 54( 7) & 4 & 0 \\ EK036 D & 934(395) & 117(10) & 23 & 6 & 1 & 116(19) & 9 & 0 \\ \hline \end{tabular} Note. – Numbers in parentheses are those expected if bursts occur randomly; for that case, one does not expect to find any rotations with 4 or more MP bursts or 3 or more IP bursts. Note that our GP detection method does not differentiate between microbursts and echoes, which becomes important for a few very bright pulses in EKO36 D, for which echoes were present. In addition, we are not able to distinguish microbursts that occur very close together in time. The number of detections differ from Lin et al. (2023) as a different, more robust, search algorithm is implemented here (see Section 4.1).
\end{table}
Table 3: Number of Rotations with Multiple Bursts.
To investigate the distributions further, we show histograms of the time delay between pulses in Figure 7. Overdrawn are expectations for randomly arriving, independent pulses. We constructed these by bootstrapping, where we repeatedly reassign new random pulse cycles to our observed sets of pulses, and then recalculate the time delay distributions. Note that in our bootstraps, we do not randomize pulse phase, so that the observed phase distribution is correctly reflected in the time delays. One sees that as a function of pulse cycle (right column panels for MP and IP GPs in Fig. 7), the time delay distributions are not well defined.
Figure 6: MP GP and IP GP fluence and count distributions as a function of pulse phase for each EK036 observation. We used pulse phase bins of \(0.1\%\) and fluence bins of \(0.1\ \mathrm{dex}\). The light purple line in the fluence panels show the median for bins with more than \(2\) detected pulses.
ure 7), the observed histograms follow the expected exponential distribution (although the observed counts are slightly lower than the expected ones because not all pulses are independent, as is implicitly assumed in the bootstraps).
For the time delays between pulses that occur in the same cycle (left column panels for MP and IP GPs in Figure 7), the observed distributions are very different from those expected for randomly occurring bursts. One sees a large peak at short delays, representing the excess microbursts from Table 3, following a roughly exponential distribution with a mean time between bursts of \(\sim 30\;\mu\)s or so. Intriguingly, at somewhat larger time difference, there seem to be fewer bursts than expected for independent events. This suggests that while a given detection has an enhanced probability of being in a group of causally related microbursts, the occurrence of a burst also suppresses the likelihood of another, independent, burst being produced in the same rotation. Thus, our results confirm that GPs are often composed of multiple microbursts, and they indicate that another, independent GP is less likely to occur right after.
### Scattering Features
In Figure 6, one sees that in EK036 D, several MP GPs were detected at pulse phases quite far from the median phase. To investigate this, we looked at the arrival times of all GPs detected in EK036 D (see left panel of Figure 8). We found that the outliers occurred in two pulse rotations, which turned out to contain the brightest GPs in EK036 D. Looking at the pulse profiles of these brightest GPs, one sees that they are very similar (see right panels of Figure 8). In fact, closer
Figure 7: Time delays between successive GPs for the MP (in blue) and IP (in orange) components for each EK036 observation. On the left MP and IP columns, time delays within a pulse rotation are shown with bins of \(10\;\mu\)s and \(20\;\mu\)s for the MP and IP respectively; the low counts in the first bin reflect the minimum separation of \(8.75\;\mu\)s between detected pulses. On the right MP and IP columns, time delays in pulse rotations are shown with bins of \(1\) rotation and \(4\) rotations for the MP and IP respectively. The red lines show the average time delay histograms for \(1000\) bootstrap iterations, in which we randomized the rotation in which a pulse was seen (but not the phase, to keep the observed phase distribution).
examination reveals that all of the brightest GPs detected in EK036 D show similar pulse profiles. This implies that the pulses far from the median pulse phase arrive late because they are actually weak echoes of the main burst, with amplitudes down to \(\sim 0.4\%\) of the peak flux and delays up to \(\sim 300~{}\mu\)s.
In Figure 9, we show singular value decomposition (SVD) approximations of the average MP GP profile for each epoch (for the IP, too few bright pulses were available). This was created from MP GP rotations with peak intensities greater than \(200~{}\mathrm{Jy}\) and seemingly single peaks, aligned using time offsets found by correlation with a reference pulse. To avoid giving too much weight to the brightest pulses, and thus risking that remaining substructure enters the average profile, we normalized each rotation by the intensity at the correlation maximum before doing the SVD. One sees that all profiles are fairly sharply peaked, but sit on top of a base, which has the expected asymmetric part extending to later time due to scattering, as well as a more symmetric component, likely resulting from the collective effect of faint microbursts. Comparing the epochs, one sees that for EK036 A-C, the profile dropoff is relatively smooth and becomes undetectable after \(\sim\!200~{}\mu\)s, while in EK036 D, the tail is much longer, extending to \(\sim\!400~{}\mu\)s, and is much more bumpy.
Almost certainly, all bumps are echoes, including those at shorter delay in EK036 B (more clearly seen in the linear-scale plots in Lin et al.2023), Indeed, looking carefully at the stack of profiles in Figure 9, one sees that the echoes in EK036 D drift in time, moving slightly further away from the MP during the observation, with perhaps even a hint that echoes further away from the main bursts drift faster than those closer in. (Note that this stack is not completely linear in time, although given that the GP detection rate is roughly constant throughout, it is not far off.) This change in time is expected for echoes off a structure with changing distance from the line of sight, and indeed has been seen for a very prominent echo by Backer et al. (2000); Lyne et al. (2001). Overall, our observations suggests echoes are common, as also concluded from daily monitoring at \(600~{}\mathrm{MHz}\) by Serafin-Nadeau et al. (2023, in prep.).
Figure 8: _Left_: MP GPs and IP GPs detected in the EK036 D data. The gray shaded regions indicate when the telescope was not observing the Crab Pulsar and the black vertical lines mark our MP GP and IP GP windows. In the inset, we show two pulse rotations containing the brightest GPs “A” and “B”, in red and orange respectively. _Right, Top_: Waterfalls of the two brightest pulses in EK036 D with \(1~{}\mu\)s time resolution and \(1~{}\mathrm{MHz}\) frequency resolution. _Right, Bottom_: Pulse profile of the two brightest pulses in EK036 D with \(1~{}\mu\)s time resolution scaled to the peak of each pulse. Pulses “A” and “B” show similar features and we conclude that during the EK036 D observations, weak echoes were present at large delays.
## 5 Summary of Conclusions
The fine time resolution and high sensitivity in our beam-formed EVN data allowed us to confidently detect \(65951\) GPs with fluences above \(\sim 150\ \mathrm{Jy\ \mu s}\) over a short period of \(7.32\mathrm{hr}\). Within each of our four observations, we found that the GP detection rates are fairly constant, but that between epochs they differ by a factor of \(\sim\!2\). Similar changes were seen previously, and were suggested by Lundgren et al. (1995) to reflect changes in overall magnification of the scattering screens along the line of sight.
The changes in magnification are consistent with the pulse fluence distributions, which are power-law like at high fluence, but with a flattening at lower fluences; the distributions from the different epochs can be shifted to each other with a change in fluence scale. We noted that the fluence distributions are similar to what is expected for log-normal distributions, but found that the residual signals seen in the GP phase windows after removing the GPs we detected were larger than expected if the log-normal distribution continued also below our detection limit. Nevertheless, it suggests that with only somewhat more sensitive observations, it should be possible to get a fairly complete sampling of all GPs that contribute to the average flux, at least for the MP component.
Analyzing the pulse phase distributions, we confirm previous observations showing that the majority of GPs occur within very narrow phase windows. Furthermore, we observe no significant variations in the median flux distributions as a function of pulse phase. This suggests that it is the probability of observing a pulse that depends on pulse phase, not its energy, implying that the angle within which a pulse is emitted is much narrower than the rotational phase window, as expected if the plasma causing them is travelling highly relativistically (Bij et al., 2021; Lin et al., 2023).
With our high detection rates, we were able to investigate the distribution of time delays between successive bursts within the same pulse rotation. We detect a larger number than expected if all bursts were due to a Poissonian process, and infer that \(\sim\!5\%\) of bursts come in groups of 2 or 3 causally related microbursts, with a typical separation in time of \(\sim\!30\ \mu\)s.
Additionally, our high sensitivity revealed weak echo features for individual bright pulses, which drift slightly but sig
Figure 9: _Line plots_: SVD approximation of the MP pulse profile for all observations. In EK036 B, echoes are seen close to the profile’s peak (see Lin et al., 2023 for more details). The profile for EK036 D shows multiple weak echoes up to \(\sim\!300\ \mu\)s. _Image_: The MP pulse stack for EK036 D, using a logarithmic colour scale to bring out faint features. Each pulse is aligned by correlating with the rotation with the brightest pulse in EK036 D (which is appears to be a simple single microburst) and then normalized by the intensity at time \(0\) (the black dashed line). The echoes appear to move out over time, as one can see by comparing the location of the most prominent faint echo with the dashed white vertical line near it (time is increasing both upwards and to the right in this image).
nificantly even over our timescales of just a few hours. We infer that echo events are not rare.
Given our findings, we believe even more sensitive follow-up studies of the Crab Pulsar would be very useful. This would be possible using more small dishes (spaced sufficiently far apart that the Crab Nebula is well-resolved) and by recording a larger bandwidth.
## Acknowledgements
We thank the anonymous referee for their comments, which improved the clarity of this manuscript. We thank the Toronto Scintillometry group, and in particular Nikhil Mahajan, for useful discussion on GP statistics. Computations were performed on the Niagara supercomputer at the SciNet HPC Consortium (Loken et al., 2010; Ponce et al., 2019). SciNet is funded by: the Canada Foundation for Innovation; the Government of Ontario; Ontario Research Fund - Research Excellence; and the University of Toronto. M.Hv.K. is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) via discovery and accelerator grants, and by a Killam Fellowship.
The European VLBI Network (EVN) is a joint facility of independent European, African, Asian, and North American radio astronomy institutes. Scientific results from data presented in this publication are derived from the following EVN project codes: EK036 A-D.
astropy (Astropy Collaboration et al., 2013, 2018, 2022), Baseband (Van Kerkwijk et al., 2020), CALC10 (Ryan & Vandenberg, 1980), numpy (Harris et al., 2020), matplotlib (Hunter, 2007), pulsarbat (Mahajan & Lin, 2023), scipy (Virtanen et al., 2020), tempo2 (Hobbs & Edwards, 2012).
| ```
Crab PulsarのEVNデータ4エポックを解析しました。Crab Nebulaを解像することで高感度を得て、foldedprofile内の微弱な高周波成分も検出できます。また、65951個の巨大なパルスを検出しています。これらは、その発生率、 fluence、相、到着時間分布を調査するために使用されています。主パルス成分に関しては、私たちの巨大なパルスは約80%の総輝度を占めています。これは、巨大なパルスエネルギー分布がほぼ完全に含まれていることを示唆していますが、その分布が残りの20%まで拡張されることは、微弱なブーストの大量発生を必要とする可能性があることを示唆しています。単一回転のブーストの到着時間差を観察することで、その間隔が近い巨大パルスが見つかる確率は、ランダムなブースト発生よりも増加しています。ある巨大パルスは、 |
2304.00050 | kNN-Res: Residual Neural Network with kNN-Graph coherence for point
cloud registration | In this paper, we present a residual neural network-based method for point
set registration that preserves the topological structure of the target point
set. Similar to coherent point drift (CPD), the registration (alignment)
problem is viewed as the movement of data points sampled from a target
distribution along a regularized displacement vector field. While the coherence
constraint in CPD is stated in terms of local motion coherence, the proposed
regularization term relies on a global smoothness constraint as a proxy for
preserving local topology. This makes CPD less flexible when the deformation is
locally rigid but globally non-rigid as in the case of multiple objects and
articulate pose registration. A Jacobian-based cost function and
geometric-aware statistical distances are proposed to mitigate these issues.
The latter allows for measuring misalignment between the target and the
reference. The justification for the k-Nearest Neighbour(kNN) graph
preservation of target data, when the Jacobian cost is used, is also provided.
Further, to tackle the registration of high-dimensional point sets, a constant
time stochastic approximation of the Jacobian cost is introduced. The proposed
method is illustrated on several 2-dimensional toy examples and tested on
high-dimensional flow Cytometry datasets where the task is to align two
distributions of cells whilst preserving the kNN-graph in order to preserve the
biological signal of the transformed data. The implementation of the proposed
approach is available at https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/
under the MIT license. | Muhammad S. Battikh, Dillon Hammill, Matthew Cook, Artem Lensky | 2023-03-31T18:06:26 | http://arxiv.org/abs/2304.00050v2 | # kNN-Res: Residual Neural Network with kNN-Graph coherence for point cloud registration
###### Abstract
In this paper, we present a residual neural network-based method for point set registration that preserves the topological structure of the target point set. Similar to coherent point drift (CPD), the registration (alignment) problem is viewed as the movement of data points sampled from a target distribution along a regularized displacement vector field. While the coherence constraint in CPD is stated in terms of local motion coherence, the proposed regularization term relies on a global smoothness constraint as a proxy for preserving local topology. This makes CPD less flexible when the deformation is locally rigid but globally non-rigid as in the case of multiple objects and articulate pose registration. A Jacobian-based cost function and geometric-aware statistical distances are proposed to mitigate these issues. The latter allows for measuring misalignment between the target and the reference. The justification for the k-Nearest Neighbour(kNN) graph preservation of target data, when the Jacobian cost is used, is also provided. Further, to tackle the registration of high-dimensional point sets, a constant time stochastic approximation of the Jacobian cost is introduced. The proposed method is illustrated on several 2-dimensional toy examples and tested on high-dimensional flow Cytometry datasets where the task is to align two distributions of cells
whilst preserving the kNN-graph in order to preserve the biological signal of the transformed data. The implementation of the proposed approach is available at [https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/](https://github.com/MuhammadSaeedBatikh/kNN-Res_Demo/) under the MIT license.
## 1 Introduction
Point set registration is a widely studied problem in the field of computer vision but also arises in other fields e.g. bioinformatics as is discussed below. The problem involves aligning a deformed target set of \(d\)-dimensional points to another reference point set by applying a constrained transformation. This alignment allows for improved comparison and analysis of the two sets of points and is used in a variety of fields including object tracking, body shape modeling, human pose estimation, and removal of batch effects in biological data. [1, 2, 3, 4, 5]
Point set registration techniques are typically categorized based on two main properties, first, whether the technique is a correspondence-based or a correspondence-free technique, and second, whether the estimated transformation is rigid or non-rigid. Correspondence-based techniques require the availability of correspondence information (e.g. labels) between the two point sets, while correspondence-free, sometimes called simultaneous pose and correspondence registration, does not require such information and therefore is considered a significantly more difficult problem. Rigid registration techniques are also generally simpler. A rigid transformation is an isometric transformation that preserves the pairwise distance between points and such transformation is typically modeled as a combination of rotation and translation. Several rigid registration techniques have been proposed in [6, 7, 8, 9, 10, 11, 12, 13, 14]. Assuming the transformation is rigid, however, makes the types of deformations that could be handled quite limited. Non-rigid transformations allow for more flexibility; however, this makes the problem ill-posed as there are an infinite number of transformations that could align two point sets, thus, non-rigid registration techniques employ additional constraints.
### Problem Formulation
In this section, we formulate the alignment problem. Inspired by CPD [15], we view an alignment method as finding a map \(\phi\) that transforms data points sampled from an underlying distribution \(Q\) to distribution \(P\) in such a way that preserves the topological structure of data sampled from \(Q\). This is an ill-posed density estimation problem, therefore, we require an additional desiderium for \(\phi\) to be as simple as possible. In this context, we call a map \(\phi\) simple if it is close to the identity transformation. Importantly, this could be visualized as data points sampled from \(Q\) moving along a regularized displacement vector field \(F\).
More formally, we denote two sets of \(d\)-dimensional vectors (points), a ref
erence point set \(\mathbf{R}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\), and target point set \(\mathbf{T}=\{\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{m}\}\), generated by a probability distributions \(P\) and \(Q\) respectively. Additionally, a \(k\)-Nearest Neighbour (kNN) graph is associated with (or constructed from) the set \(\mathbf{T}\) which must be preserved after transformation. A kNN graph for set \(\mathbf{T}\) is a directed graph such that there is an edge from node \(i\) to \(j\) if and only if \(\mathbf{y}_{j}\) is among \(\mathbf{y}_{i}\)'s \(k\) most similar items in \(\mathbf{T}\) under some similarity measure \(\rho\).
Thus, the goal of an alignment method, given the sets \(\mathbf{R}\) and \(\mathbf{T}\) in a matrix form of \(X\in\mathbf{R}^{n\times d}\) and \(Y\in\mathbf{R}^{m\times d}\) respectively, is finding a transformation \(\phi\) parameterized by \(\theta\) such that:
\[\hat{\theta}=\arg\max_{\theta}D(\phi(Y;\theta),X) \tag{1}\]
subject to the constraints:
\[\texttt{kNN}_{g}(\phi(Y;\theta))=\texttt{kNN}_{g}(y) \tag{2}\]
where \(D\) is a statistical distance that measures the difference between two probability distributions.
### Limitations of existing approaches
A classic example of a such constraint is found in CPD [15] and its extensions [16, 17, 18]. CPD uses a Gaussian Mixture Model to induce a displacement field from the target to source points and uses local motion coherence to constrain the field such that nearby target points move together. CPD achieves this however via a global smoothing constraint which makes it locally inflexible, and therefore unsuitable for articulated deformations in 3D human data, scenes with multiple objects, and biological data [19].
In this work, we introduce a Jacobian orthogonality loss and show that it is a sufficient condition for preserving the kNN graph of the data. Jacobian orthogonality introduced as a penalty \(|\mathbf{J}_{\mathbf{X}}^{\top}\mathbf{J}_{\mathbf{X}}-\mathbf{I}_{d}|\) where \(\mathbf{J}_{\mathbf{X}}\) is the Jacobian matrix at a point \(\mathbf{x}\) and \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. The penalty has been proposed in other contexts as well, such as unsupervised disentanglement [20] and medical image registration [21, 22].
In [21], the finite difference method is employed to compute the Jacobian penalty for the B-splines warping transformation, and mutual information of corresponding voxel intensities is used as the similarity measure. Instead of using finite difference for the Jacobian penalty, which produces a numerical approximation of first-order derivatives, the authors of [22] derive an analytical derivative specific to the multidimensional B-splines case. Such approaches however are limited to low dimensions by the nature of the transformations used, the way in which the Jacobian penalty is computed, and their proposed similarity measures.
### Contributions
To address these limitations, we use Hutchinson's estimator [20, 23] for fast computation of the Jacobian loss for high-dimensional point clouds, a scalable residual neural network (ResNet) [24] architecture as our warping transformation, and geometry-aware statistical distances. The choice of ResNet with identity block \(\phi(x)=x+\delta(x)\) is natural since we view alignment similar to CPD as a regularized movement of data points along a displacement vector field; which in our case is simply \(\phi(x)-x=\delta(x)\). It is also worth mentioning that ResNets can learn identity mapping more easily. Further discussion on this choice is given in section 2.2. Moment-matching ResNet(MM-Res) [5] use a similar ResNet architecture with RBF kernel maximum-mean discrepancy as its similarity measure [25, 26], however, no topological constraints are provided to preserve the topological structure of the transformed data nor to limit the nature of the learned transformation as shown in Figure 1. Additionally, while maximum-mean discrepancy is a geometry-aware distance, we address some limitations by incorporating Sinkhorn divergence into our framework [27].
Figure 1: Stanford Bunny example showing the effect of the Jacobian penalty on the learned transformation.
To elaborate further, we first start by defining Maximum Mean Discrepancy (MMD):
\[\texttt{MMD}(\alpha,\beta):=\frac{1}{2}\int_{X^{2}}k(\mathbf{x},\mathbf{y})d \zeta(\mathbf{x})d\zeta(\mathbf{y}) \tag{3}\]
where \(\alpha,\beta\in M_{1}^{+}(X)\) are unit-mass positive empirical distributions on a feature space \(X\), \(\zeta=\alpha-\beta\), and \(k(\mathbf{x},\mathbf{y})\) is a kernel function. MM-Res uses an RBF kernel which is suitable for high-dimensional Euclidean feature spaces (e.g. to represent \(X\subset\mathbb{R}^{n}\)) and makes training complexity low as it scales up to large batches, nonetheless, such kernel blinds the model to details smaller than its standard deviation, and the networks' gradient suffers from the well-known vanishing gradient problem. One simple solution is to decrease the standard deviation of the kernel; however, this introduces another issue, namely, the target points will not be properly attracted to source points [28]. In practice, this makes such a framework incapable of learning simple deformations with sizable translations as we show in section 4. Optimal transport (OT) losses do not typically suffer from this issue and produce more stable gradients; however, such losses require solving computationally costly linear programs. A well-known efficient approximation of the OT problem is entropic regularized \(OT_{\epsilon}\)[29], for \(\epsilon>0\), it is defined as:
\[\texttt{OT}_{\epsilon}(\alpha,\beta):=\min_{\pi_{1}=\alpha,\pi_{2}=\beta}\int _{X^{2}}C(\mathbf{x},\mathbf{y})d\pi+\epsilon\texttt{KL}(\pi|\alpha\times\beta) \tag{4}\]
where \(C\) is a cost function (typically the Euclidean distance), \((\pi_{1},\pi_{2})\) denotes the two marginals of the coupling measure \(\pi\in M_{1}^{+}\) and KL is the KL-divergence. The solution for this formulation could be efficiently computed using the Sinkhorn algorithm as long as \(\epsilon>0\). It is clear that by setting \(\epsilon\) to 0, this minimization problem reduces back to standard OT. Sinkhorn divergence combines the advantages of MMD and OT and is defined as:
\[S_{\epsilon}(\alpha,\beta)=\texttt{OT}_{\epsilon}(\alpha,\beta)-\frac{1}{2}( \texttt{OT}_{\epsilon}(\alpha,\alpha)+\texttt{OT}_{\epsilon}(\beta,\beta)) \tag{5}\]
The authors of [29] show that:
\[\lim_{\epsilon\to 0}S_{\epsilon}(\alpha,\beta)=\texttt{OT}(\alpha,\beta) \tag{6}\]
and
\[\lim_{\epsilon\rightarrow\infty}S_{\epsilon}(\alpha,\beta)=\frac{1}{2} \texttt{MDD}_{-C}^{2}(\alpha,\beta) \tag{7}\]
where \(C\) is the kernel used by MMD.
In the following section, we review other related methods.
### Other related work
Several point cloud registration approaches have been proposed. Thin plate spline functions-based techniques preserve the local topology via local rigidity on the surface of a deformed shape; however, these approaches are not scalable
to large datasets and are typically limited to 3-dimensional point clouds [30, 31, 32, 33, 34, 35]. To address these limitations, a deep learning paradigm for point cloud registration has been adopted. Deep learning-based approaches can be divided into two categories, namely, features-based, and end-to-end learning. In features-based methods, a neural network is used as a feature extraction. By developing sophisticated network architectures or loss functions, they aim to estimate robust correspondences by the learned distinctive feature [30, 36, 37, 38]. While feature-based learning typically involves elaborate pipelines with various steps such as feature extraction, correspondence estimation, and registration, end-to-end learning methods combine various steps in one objective and try to solve the registration problem directly by converting it to a regression problem [39, 40]. For example, [39] employs a key point detection method while simultaneously estimating relative pose.
Another class of methods is Graph Matching techniques, which are quadratic assignment problems (QAP) [40]. The main challenge for such methods is finding efficient approximate methods to the otherwise NP-hard QAP. Congruent Sets Gaussian Mixture (CSGM) [41] uses a linear program to solve the graph-matching problem and apply it to solve the cross-source point cloud registration task. Another approach is a high-order graph [42] that uses an integer projection algorithm to optimize the objective function in the integer domain. Finally, Factorized Graph Matching (FGM) method [43] factorizes the large pairwise affinity matrix into some smaller matrices. Then, the graph-matching problem is solved with a simple path following optimization algorithm.
## 2 Proposed model
### Methodology
In our case, we parametrize the transformation \(\phi\) as a residual neural network and formulate the optimization problem as:
\[\mathcal{L}(\theta)=\mathcal{L}_{1}+\lambda\mathcal{L}_{2} \tag{8}\]
where \(\mathcal{L}_{1}\) is the alignment loss \(D(\theta(Y;\theta),X)\) and \(\lambda\) is a hyperparamater, and \(\mathcal{L}_{2}\) is the topology preserving loss:
\[\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{y}\in T}|\mathbf{J}_{X}^{\top} \mathbf{J}_{X}-\mathbf{I}_{d}| \tag{9}\]
where \(\mathbf{J}_{y}\) is the Jacobian matrix at points \(y\) and \(\mathbf{I}_{d}\) is the \(d\times d\) identity matrix. In section 2.4 we prove that the orthogonality of the Jacobian matrix is indeed a sufficient condition for preserving the kNN graph of the data. We use two statistical distances, namely, Sinkhorn divergences, and maximum mean discrepancy. Sinkhorn divergences is a computationally efficient approximation for the Wasserstein distance in high dimensions and converge to the maximum mean discrepancy.
\[\mathcal{L}_{1}(\theta)=S(\alpha,\beta)=\texttt{OT}_{2}(\alpha,\beta)-\frac{1}{2}( \texttt{OT}_{2}(\alpha,\alpha)+\texttt{OT}_{2}(\beta,\beta)) \tag{10}\]
where \(\texttt{OT}_{\epsilon}\) is the optimal transport with \(\mathcal{L}_{2}\)-norm cost, and \(\alpha\) and \(\beta\) are measures over reference and target distributions respectively. The measures \(\alpha\) and \(\beta\) are unknown and are only known via samples from \(\mathbf{R}\) and \(\mathbf{T}\) respectively. Although \(S_{\epsilon}(\alpha,\beta)\) interpolates to MMD as \(\epsilon\) goes to infinity, we still maintain an efficient standalone MMD distance for data where MMD performs better than the Wasserstein distance and therefore no need for the interpolation overhead. Specifically, we use Gaussian-based MMD:
\[\texttt{MMD}(\alpha,\beta):=\frac{1}{2}\int_{X^{2}}k(\mathbf{x},\mathbf{y})d \zeta(\mathbf{x})d\zeta(\mathbf{y}) \tag{11}\]
### Architecture
We use a simple ResNet identity block with a skip connection as our transformation where the output dimension is equal to the input dimension, and the output is calculated as such: \(\phi(\mathbf{y};\theta)=\mathbf{y}+\delta(\mathbf{y};\theta)\), where \(\delta\) is a standard multi-layer perceptron (MLP) with LeakyRelu activation functions and \(\theta\) represents the trainable weights of the network. The ResNet identity block has been chosen for two reasons: biasing \(\theta\) to have small values via weight decay or initializing the output layer using a distribution with mean zero and a small standard deviation minimizes the contribution of \(\delta(\mathbf{y};\theta)\) to the final transformation which makes \(\phi(\mathbf{y};\theta)\) close to the identity. Additionally, this follows the same recipe from CPD of viewing the alignment function as a smooth displacement field.
The ResNet identity block is chosen for the following two reasons. Biasing \(\theta\) to have small values via weight decay or initialization using a distribution with close to zero values minimizes the contribution of \(\delta(\mathbf{x}:\theta)\) to the final transformation which in turn makes \(\phi(\mathbf{x}:\theta)\) close to the identity by design. Additionally, since we take a similar approach to CPD by viewing the alignment transformation as a regularized movement of data point along displacement vector field \(F\); having a ResNet identity block is mathematically convenient since a displacement vector is a difference between the final position \(\phi(\mathbf{x}:\theta)\) (transformed point) and the initial position (data point) \(\mathbf{x}\) such that \(F(\mathbf{x})=\phi(\mathbf{x}:\theta)-\mathbf{x}=\delta(\mathbf{x}:\theta)\), therefore, we only need to worry about \(\delta(\mathbf{x}:\theta)\) instead of \((\phi(\mathbf{x}:\theta)-\mathbf{x})\) absent skip connection.
### Orthogonal Jacobian preserves kNN graph
In this section, we show that the orthogonality of the Jacobian matrix evaluated at data points is a sufficient condition for preserving the kNN graph of the data. A vector-valued function \(\mathcal{F}:\mathbb{R}_{n}\rightarrow\mathbb{R}_{n}\) preserves the kNN graph of data points \(X\in\mathbb{R}_{n}\) if, for every two points \(\mathbf{v}\) and \(\mathbf{w}\) that are in some small \(\epsilon\)-neighborhood of \(\mathbf{u}\), the following holds:
\[||\mathbf{u}-\mathbf{v}||_{2}^{2}<||\mathbf{u}-\mathbf{w}||_{2}^{2} \rightarrow||F(\mathbf{u}),F(\mathbf{v})||_{2}^{2}<||F(\mathbf{u}),F( \mathbf{w})||_{2}^{2}, \tag{12}\]
where \(||\cdot||_{2}^{2}\) is the squared Euclidian distance. Without loss of generality, we choose two points \(\mathbf{w}\), \(\mathbf{v}\) that lie in \(\epsilon\) neighborhood of point \(\mathbf{u}\) and linearize the vector field \(F\) around point \(\mathbf{u}\) such that:
\[F(\mathbf{x};\mathbf{u})\approx F(\mathbf{u})+\mathbf{J}_{\mathbf{u}}(\mathbf{x }-\mathbf{u}), \tag{13}\]
where \(\mathbf{J}_{\mathbf{u}}\) is the Jacobian matrix evaluated at point \(\mathbf{u}\).
The squared distance of \(\mathbf{u}\) and \(\mathbf{v}\) is:
\[||\mathbf{u}-\mathbf{v}||_{2}^{2}=(\mathbf{u}-\mathbf{v})^{\top}(\mathbf{u}- \mathbf{v})=\sum_{i}^{n}\left(\mathbf{u}_{i}-\mathbf{v}_{i}\right)^{2} \tag{14}\]
Similarly, the squared distance between \(F(\mathbf{u};\mathbf{u})\) and \(F(\mathbf{v};\mathbf{u})\) computes as follows
\[\begin{array}{rcl}||F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u})||_{2 }^{2}&=&(F(\mathbf{u};\mathbf{u})-F(\mathbf{v};\mathbf{u})^{\top}(F(\mathbf{u} ;\mathbf{u})-F(\mathbf{v};\mathbf{u}))\\ &=&F(\mathbf{u})-F(\mathbf{u})-\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u})^ {\top}(F(\mathbf{u})-F(\mathbf{u})-\mathbf{J}_{\mathbf{u}}(\mathbf{v}- \mathbf{u}))\\ &=&(\mathbf{J}_{\mathbf{u}}(\mathbf{v}-\mathbf{u}))^{\top}(\mathbf{J}_{ \mathbf{u}}(\mathbf{v}-\mathbf{u}))\\ &=&(\mathbf{v}-\mathbf{u})^{\top}\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{ \mathbf{u}}(\mathbf{v}-\mathbf{u})\\ &=&(\mathbf{v}-\mathbf{u})^{\top}(\mathbf{v}-\mathbf{u})\end{array}\]
The last step follows from the orthogonality of \(\mathbf{J}_{\mathbf{u}}\) i.e. \((\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{\mathbf{u}}=\mathbf{I})\)
### Jacobian Loss Via Finite Difference
Given a vector-valued function \(F:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d}\), a data batch \(X\in\mathbb{R}^{m\times d}\), and the Jacobian \(\mathbf{J}_{X}\) of \(F\) at points \(\mathbf{X}\) is an \(\mathbb{R}^{m\times d\times d}\) tensor, it is possible to compute \(\mathbf{J}_{\mathbf{X}}\) analytically using autodifferentiation modules, however, such computation is highly inefficient, thus, we use numerical approximation.
Given a \(d\)-dimensional vector \(\mathbf{x}=[x_{1},...,x_{d}]\), the partial first derivative of \(F\) with respect to \(x_{i}\) is:
\[\frac{\partial F}{\partial x_{i}}=\lim_{\epsilon\to 0}\frac{F(\mathbf{x}+ \epsilon e_{i})-F(\mathbf{x})}{\epsilon}, \tag{15}\]
where \(e_{i}\) is a standard basis vector (i.e. only the \(i\)th component equals 1 and the rest are zero). This could be approximated numerically using a small \(\epsilon\). The Jacobian matrix \(\mathbf{J}_{\mathbf{x}}\) is simply \(\lfloor\frac{\partial F}{\partial x_{i}},...,\frac{\partial F}{\partial x_{d}}\rfloor\). To ensure the orthogonality of the Jacobian at \(\mathbf{X}\), we minimize the following loss:
\[\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{x}\in\mathbf{X}}|\mathbf{J}_{\mathbf{ x}}^{\top}\mathbf{J}_{\mathbf{x}}-\mathbf{I}_{d}| \tag{16}\]
This process could be computed efficiently in a few lines of code as indicated in algorithm 1.
### Training
The training process (algorithm 2) takes advantage of two sets of \(d\)-dimensional vectors (points), a reference point set \(\mathbf{R}=\{\mathbf{x}_{1},\mathbf{x}_{2},...,\mathbf{x}_{n}\}\), and target point set \(\mathbf{T}=\{\mathbf{y}_{1},\mathbf{y}_{2},...,\mathbf{y}_{m}\}\). First, we sample points from \(\mathbf{R}\) and \(\mathbf{T}\) and create two matrices \(\mathbf{X}\) and \(\mathbf{Y}\). We feed \(\mathbf{Y}\) to the model and obtain \(\hat{\mathbf{Y}}\). Under the GMM assumption, we compute the GMM posterior probability as a similarity matrix and estimate \(\mathcal{L}_{1}\) as the negative log-likelihood. For the Sinkhorn divergence approach, we compute equation (10). We use the SoftkNN operator to construct the kNN graph for both the input \(\mathbf{Y}\) and the output \(\hat{\mathbf{Y}}\) and compute \(\mathcal{L}_{2}\) as the mean squared error between the two. Finally, we use backpropagation by minimizing the loss \(\mathcal{L}=\mathcal{L}_{1}+\lambda\mathcal{L}_{2}\) until convergence.
### Stochastic Approximation of Orthogonal Jacobian Loss
Using finite difference to compute the Jacobian for low-dimensional point clouds is efficient, however, the computational cost increases linearly with the dimension of the data. Thus, an approximate estimate with the constant computational cost is introduced.
Given a vector-valued function \(F\), and a sample \(\mathbf{x}\), we would like to minimize the following:
\[\mathcal{L}_{\mathbf{J}}(F)=|\mathbf{J}^{\top}\mathbf{J}\circ(1-\mathbf{I})| _{2}=\sum_{i\neq j}\frac{\partial F_{i}}{\partial x_{j}}\frac{\partial F_{j}} {\partial x_{i}} \tag{17}\]
Following [20, 23], the Hutchinson's estimator of \(\mathcal{L}_{\mathbf{J}}(F)\) can be approximated as such:
\[\mathcal{L}_{\mathbf{J}}(F)=\texttt{Var}_{r}(r_{\epsilon}^{\top}(\mathbf{J}^{ \top}\mathbf{J})r_{\epsilon})=\texttt{Var}_{r}((\mathbf{J}r_{\epsilon})^{\top }(\mathbf{J}r_{\epsilon})) \tag{18}\]
where \(r_{\epsilon}\) denotes a scaled Rademacher vector (each entry is either \(-\epsilon\) or \(+\epsilon\) with equal probability) where \(\epsilon>0\) is a hyperparameter that controls the granularity of the first directional derivative estimate and \(\texttt{Var}_{r}\) is the variance. It
is worth noting that this does not guarantee orthonormality, only orthogonality. In practice, however, we find that such an estimator produces comparable results to the standard finite difference method and could be efficiently implemented in Pytorch as shown in algorithm 3.
```
Input: \(\mathbf{R}\), and \(\mathbf{T}\) pointsets, blurring factor \(\sigma\), step size \(\epsilon\), regularisation \(\lambda\), and batch size \(b\); Output: Trained model \(\triangleright\) Simple mini-batches of size \(b\) from \(\mathbf{R}\) and \(\mathbf{T}\) while\((\mathbf{X},\mathbf{Y})\in(\mathbf{R},\mathbf{T})\) until convergencedo \(\phi(\mathbf{Y})\leftarrow\mathbf{Y}+\delta(\mathbf{Y})\); ifloss=="sinkhorn"then \(\mathcal{L}_{1}=\mathtt{S}(\mathbf{X},\phi(\mathbf{Y});\sigma^{2})\); else \(\mathcal{L}_{1}=\mathtt{MMD}(\mathbf{X},\phi(\mathbf{Y});\sigma^{2})\); \(\mathbf{J}_{\mathbf{Y}}[i,:]=\frac{\delta(\mathbf{Y}+\epsilon\epsilon_{i})- \delta(\mathbf{Y})}{\epsilon}\); \(\mathcal{L}_{2}=\frac{1}{m}\sum_{\mathbf{x}\in\mathbf{X}}|\mathbf{J}_{\mathbf{ x}}^{\top}\mathbf{J}_{\mathbf{x}}-\mathbf{I}_{d}|\); \(\mathcal{L}=\mathcal{L}_{1}+\lambda\mathcal{L}_{2}\); \(\triangleright\) backpropogation step Minimize(\(\mathcal{L}\));
```
**Algorithm 2**Training kNN-Resnet
### Parameters Selection
The proposed model has three main hyperparameters, namely: \(\sigma\), \(\epsilon\), and \(\lambda\). In the case of Sinkhorn divergence, \(\sigma>0\) is the blur (interpolation) parameter between OT and MMD, with a default value of \(0.01\) for datasets that lie in the first quadrant of the unit hypercube (minmax normalized data). Decreasing \(\sigma\) has the effect of solving for an exact OT, which typically produces very accurate registration, however, this comes at a slower convergence cost. In the cases where it is more advantageous to use MMD, \(\sigma\) represents the standard deviation of the Gaussian kernel. \(\epsilon>0\) represents the finite difference step size and controls the radius of topology preservation around each point. It is worth noting that a large epsilon value that covers all data tends to produce globally isomorphic transformations. \(\lambda>0\) is simply a regularization parameter that prioritizes regularization over alignment and is typically less than \(0.01\). An additional hyperparameter \(k\) is introduced when using a stochastic approximation of Jacobian orthogonality for high-dimensional data. This hyperparameter determines the number of Rademacher vectors sampled to estimate the Jacobian orthogonality penalty. Generally, a large \(k\) tends to produce a more accurate estimation, however; in practice, \(k=5\) seems to be a sufficient number for the datasets we experimented with.
```
1defstochastic_orth_jacobian(G,z,epsilon=0.01):
2''
3InputG:FunctiontocomputetheJacobianPenaltyfor.
4Inputz:(batchsize,d)InputtoGthattheJacobianis
5computedw.r.t.
6Inputk:numberofdirectionstosample(default5)
7Inputepsilon:(default0.1)
8Output:mean(\(|\mathbf{J}_{X}^{T}\mathbf{J}_{X}-\mathbf{I}_{d}|\))
9'
10r=torch.randint(0,2,size=torch.Size((k,*z.size()),))
11#r:rademacherrandomvector
12r[r==0]=-1
13vs=epsilon*r
14diffs=[G(z+v)-Gzforvinvs]
15#std:stochasticfinitediffs
16sfd=torch.stack(diffs)/epsilon
17loss=torch.var(sfd,dim=0).max()
18returnloss
19
```
**Algorithm 3**PyTorch code for Hutchinson approximation for Jacobian off-diagonal elements at data points \(z\).
## 3 Experiments
In this section, we provide experimental results on several datasets, namely, Chui-Rangarajan synthesized dataset used in [31, 44, 45], and single-cell RNA data used in [5]. The Chui-Rangarajan synthesized dataset is comprised of two shapes; a fish shape, and a Chinese character shape. Each shape is subjected to 5 increasing levels of deformations using an RBF kernel, and each deformation contains 100 different samples. The samples are generated using different RBF coefficients which are sampled from a zero-mean normal distribution with standard deviation \(\sigma\), whereby increasing \(\sigma\) leads to generally larger deformation.
### Results on 2D Data
We use the root-mean-squared error (RMSE) between the transformed data \(\hat{y_{i}}\) and the ground truth \(y_{i}\) available from the Chui-Rangarajan synthesized dataset: \(error=\sqrt{\frac{1}{m}\sum_{i=0}^{m}{(\hat{y_{i}}-y_{i})^{2}}}\).
It is important to note that such ground-truth correspondence is absent during training time and is only available during test time. Figures 2 and 3 show the initial point set distributions and their corresponding aligned versions for the Chinese character and the fish examples respectively. We also report results for our kNN-Res, MM-Res[5], CPD [15], TRS-RPM [31], RPM-LNS [45], and GMMREG [32] over 5 deformation levels and 100 samples per level. Figures 4b and 4b show results for tested models for the Chinese character, and Fish datasets respectively. We notice that after a certain level of
non-rigid deformation, MM-Res is unable to converge. For our kNN-Res, we set \(\epsilon=.005,\lambda=10^{-5},\sigma=.001\) and number of hidden units = 50. We start with a relatively high learning rate (0.01) for ADAM [46] optimizer and use a reduce-on-plateau scheduler with a reduction factor of 0.7 and minimum learning rate of \(5\times 10^{-5}\). Qualitatively, the grid-warp representations from the second column in figures 2 and 3 indicate that our estimated transformations are, at least visually, "simple" and "coherent". Furthermore, to quantitatively assess neighborhood preservation we use the hamming loss \(\mathcal{L}_{H}\) to estimate the difference between the kNN graph before and after transformation:
\[\mathcal{L}_{H}=\sum_{i=0}^{m}\sum_{j=0}^{m}I(\hat{p}_{i,j}^{k}\neq p_{i,j}^{k})\]
where \(p_{i,j}^{k}\) is the \(i\),\(j\) element of the k-NN graph matrix before transformation, \(\hat{p}_{i,j}^{k}\) is the corresponding element after transformation, and \(I\) is the indicator function. Figures 5b and 5a show the difference in neighborhood preservation between MM-Res and our kNN-Res for the Chinese character, and Fish datasets respectively for three different levels of deformations.
Moreover, despite the additional topology regularization term, our kNN-Res generally incurred smaller alignment errors and was able to converge under large deformation levels.
### Results on High-Dimensional CyTOF Data
Cytometry by time of flight (CyTOF) provides the means for the quantification of multiple cellular components data, however, is susceptible to the so-called batch effect problem, where systematic non-biological variations during
Figure 2: The Chinese character deformation example: Top row represents original and deformed sets, Mid row represents the vector field, and Bottom row is the final alignment.
the measuring process result in a distributional shift of otherwise similar samples. This effect breaks the intra-comparability of samples which is a crucial component of downstream tasks such as disease diagnosis and typically requires the intervention of human experts to remove these batch effects. The CyTOF dataset used in our experiments was curated by the Yale New Haven Hospital. There are two patients, and two conditions were measured on two different days. All eight samples have 25 markers each representing a separate dimension ('CD45', 'CD19', 'CD127', 'CD4', 'CD8a', 'CD20', 'CD25', 'CD278', 'TNFa', 'Tim3', 'CD27', 'CD14', 'CCR7', 'CD28','CD152', 'FOXP3', 'CD45RO', 'INFg', 'CD223', 'GzB', 'CD3', 'CD274', 'HLADR', 'PD1', 'CD11b'), and a range of cells (points) between 1800 to 5000 cells per sample. The split is done such that samples collected on day 1 are the target, and samples collected on day 2 are the target, and samples collected on day 3 are the target, and samples collected on day 4 are the target, and samples collected on day 5 are the target, and samples collected on day 6 are the target, and samples collected on day 7 are the target, and samples collected on day 8 are the target, and samples collected on
Figure 5: The figures show Hamming loss for the following levels of deformations: (left) level 1, (mid) level 2, (right) level 3.
Figure 6: The blue and red dots represent 1st and 2nd principal components of reference (patient #2 on day 2) and the target samples (patient #2 on day 1) correspondingly.
the reference, resulting in four alignment experiments.
We follow the exact preprocessing steps described in [5]. To adjust the dynamic range of samples, a standard pre-preprocessing step of CyTOF data is applying a log transformation [47]. Additionally, CyTOF data typically contains a large number of zero values (40%) due to instrumental instability which are not considered biological signals. Thus, a denoising autoencoder (DAE) is used to remove these zero-values [48]. The Encoder of the DAE is comprised of two fully-connected layers with ReLU activation functions. The decoder (output) is a single linear layer, with no activation function. All layers of the DAE have the same number of neurons as the dimensionality of the data. Next, each cell is multiplied by an independent Bernoulli random vector with probability =.8, and the DAE is trained to reconstruct the original cell using an MSE loss. Furthermore, the DAE is optimized via RMSprop and weight decay regularization. The zero values in both reference and target are then removed using the trained DAE. Finally, each feature in both target and reference samples is independently standardized to have a zero-mean and unit variance. For our kNN-Res, we set \(\epsilon=0.05,\lambda=0.1,\sigma=0.04\), \(k=5\) for Hutchinson's estimator, and the number of hidden units to 50. We start with a relatively high learning rate (0.01) for the ADAM optimizer and use a reduce-on-plateau scheduler with a reduction factor of.7, and a minimum learning rate of \(5\times 10^{-5}\). Figure 6 shows the first two principal components of data before and after alignment using two kNN-Res models with different lambdas. Although the two samples appear less aligned when using a large \(\lambda\), this comes with the benefit of preserving the topology of the data as shown by the learned transformation in figure 7 where points (cells) are moved in a coherent way.
This becomes more clearer when looking at the marginals in figure 13 in the appendix. In this experiment, we trained five models with five different
Figure 7: Point set transformation(alignment) of patient #2 sample on day 1 and day 2, shown in space of 1st and 2nd principal components.
lambdas ranging from 0 to 1. It is clear that having a small \(\lambda\) favors alignment over faithfulness to the original distribution, however, increasing \(\lambda\) preserves the shape of the original data after transformation, which is desirable in biological settings. For results of other experiments see Appendix.
## 4 Discussion
### Implications
Point-set registration methods are typically used for problems in computer vision to align point clouds produced by either stereo vision or by Light Detection and Ranging devices (e.g. Velodyne scanner) for instance to stitch scenes and align objects. These datasets are usually of 2 or 3 dimensions and hence the methods had limited exposure to high-dimensional datasets. Biological data, on the other hand, is usually of high dimension and hence methods from point-set registration do not directly translate to biological data. The proposed method in this study was tested on a 25-dimensional CyTOF dataset. However, in flow and mass cytometry, data could easily go beyond 50 dimensions (markers). For instance, methods that combine protein marker detection with unbiased transcriptome profiling of single cells provide an even higher number of markers. These methods show that multimodal data analysis can achieve a more detailed characterization of cellular phenotypes than transcriptome measurements alone [49, 50] and hence recently gained significant traction. Unfortunately, these methods require more sophisticated batch normalization algorithms, since manual gating and normalization using marginal distributions become infeasible. It is worth mentioning that even though the experts are making sure that the marginal distributions are aligned, there is still no guarantee that the samples are aligned in the higher dimensional space. Moreover, the alignment might result in such nonlinear and non-smooth transformations that break biological relationships or introduce non-existing biological variabilities. The proposed method mitigates these issues and guarantees smooth transformations.
### Limitations
It is clear from the last step of the proof that the orthogonal Jacobian is too strong a condition for preserving the kNN graph:
\[(\mathbf{v}-\mathbf{u})^{\top}\mathbf{J}_{\mathbf{u}}^{\top}\mathbf{J}_{ \mathbf{u}}(\mathbf{v}-\mathbf{u})=(\mathbf{v}-\mathbf{u})^{\top}(\mathbf{v}- \mathbf{u}) \tag{19}\]
The objective is satisfied by preserving inequality and not equality. In other words, it is only necessary and sufficient for \(\mathbf{J}\) to preserve the kNN graph if the following holds:
\[\mathbf{u}^{\top}\mathbf{u}\leq\mathbf{v}^{\top}\mathbf{v}\rightarrow\mathbf{ u}^{\top}\mathbf{J}^{\top}\mathbf{J}\mathbf{u}\leq\mathbf{v}^{\top}\mathbf{J} ^{\top}\mathbf{J}\mathbf{v} \tag{20}\]
or
\[\langle\mathbf{u},\mathbf{u}\rangle\leq\langle\mathbf{v},\mathbf{v}\rangle \rightarrow\langle\mathbf{J}\mathbf{u},\mathbf{J}\mathbf{u}\rangle\leq \langle\mathbf{J}\mathbf{v},\mathbf{J}\mathbf{v}\rangle \tag{21}\]
Having strict equality puts a limitation on the kind of transformations the model is capable of learning. Furthermore, even if the deformation could theoretically be expressed, such a penalty makes convergence unnecessarily slower. On the empirical side, we only have a limited number of experiments to test the proposed method. More experimentation and ablation are required to better understand the limits of our current approach and to learn how it fairs on a wider selection of real-world data such as RNA-Seq.
### Future Work
An important future direction is incorporating local or partial matching using modified alignment losses such as Gromov-Wasserstein distance. This should lead to a much more robust solution than global matching, especially in the case of outliers and missing objects. We also posit that solving point set registration under topological constraints such as preserving the kNN graph is naturally extendable to dimensionality reduction.
## 5 Conclusion
This paper presents a simple, scalable framework for point cloud registration. At its core, it consists of three components, namely (a) residual neural network with identity blocks as a parametrized displacement field, (b) Jacobian penalty as a topology-preserving loss, and (c) Sinkhorn Divergence as a sample-based, geometry-aware statistical distance. Additionally, by incorporating Hutchinson's estimator for the Jacobian loss, we show that our model is easily extensible to high dimensions with constant complexity. Furthermore, we offer both qualitative and quantitative analysis for synthetic and CyTOF datasets showing the flexibility and applicability of our model in multiple domains.
| この論文では、点セットの登録に適用する残差ネットワークに基づく方法を提案します。この方法では、対象点セットの拓撲構造を保ちます。相対的にcoherent point drift (CPD) と同様、登録 (アライメント) 問題は、標的分布から抽出したデータポイントが正規化された位相ベクトル場に沿って移動したものと見なされます。CPD のcoherence 制約は、局所的な動き相関性で述べられますが、提案された正規化項は、局所的なトポロジーを保つためのグローバルな滑らかさ制約を代理として使用しています。これは、複数のオブジェクトや複雑な姿勢登録の場合、CPD の柔軟性が低下する原因となっています。この問題に対処するため、ヤコビアンベースのコスト関数と幾何学的Awareの統計的距離が提案されました。後者は、標的と参照との間のずれを測定するのに使用 |
2305.19628 | On the origin of the evolution of the halo occupation distribution | We use the TNG300 magneto-hydrodynamic simulation and mock catalogues built
using subhalo abundance matching (SHAM) to study the origin of the redshift
evolution of the halo occupation distribution (HOD). We analyse stellar-mass
selected galaxy samples with fixed number densities, spanning the redshift
range $0 \le z \le 3$. We measure their halo occupation functions and fit the
HOD parameters to study their evolution over cosmic time. The TNG300 galaxy
population strongly depends on the baryonic physics implemented in the
simulation. In contrast, the galaxy population predicted by a basic SHAM model
without scatter is a direct result of the cosmology of the dark matter
simulation. We find that the HOD evolution is similar for both models and is
consistent with a previous study of the HOD evolution in semi-analytical
models. Specifically, this is the case for the ratio between the characteristic
halo masses for hosting central and satellite galaxies. The only HOD parameter
whose evolution varies across models is $\sigma_{\rm logM}$, which contains
information about the stellar mass-halo mass relation of the galaxies and does
not strongly impact galaxy clustering. We also demonstrate that the dependence
on the specific values of the cosmological parameters is small. We conclude
that the cosmology of the galaxy sample, i.e. the cosmological hierarchical
growth of structure, and not the baryonic physics prescriptions, governs the
evolution of the HOD for stellar mass-selected samples. These results have
important implications for populating simulated lightcones with galaxies and
can facilitate the interpretation of clustering data at different redshifts. | Sergio Contreras, Idit Zehavi | 2023-05-31T07:54:14 | http://arxiv.org/abs/2305.19628v2 | # On the origin of the evolution of the halo occupation distribution
###### Abstract
We use the TNG300 magneto-hydrodynamic simulation and mock catalogues built using subhalo abundance matching (SHAM) to study the origin of the redshift evolution of the halo occupation distribution (HOD). We analyse stellar-mass selected galaxy samples with fixed number densities, spanning the redshift range \(0\leq z\leq 3\). We measure their halo occupation functions and fit the HOD parameters to study their evolution over cosmic time. The TNG300 galaxy population strongly depends on the baryonic physics implemented in the simulation. In contrast, the galaxy population predicted by a basic SHAM model without scatter is a direct result of the cosmology of the dark matter simulation. We find that the HOD evolution is similar for both models and is consistent with a previous study of the HOD evolution in semi-analytical models. Specifically, this is the case for the ratio between the characteristic halo masses for hosting central and satellite galaxies. The only HOD parameter whose evolution varies across models is \(\sigma_{\rm logM}\), which contains information about the stellar mass-halo mass relation of the galaxies and does not strongly impact galaxy clustering. We also demonstrate that the dependence on the specific values of the cosmological parameters is small. We conclude that the cosmology of the galaxy sample, i.e. the cosmological hierarchical growth of structure, and not the baryonic physics prescriptions, governs the evolution of the HOD for stellar mass-selected samples. These results have important implications for populating simulated lightcones with galaxies and can facilitate the interpretation of clustering data at different redshifts.
keywords: galaxies: evolution - galaxies: formation - galaxies: haloes - galaxies: statistics - cosmology: theory - large-scale structure of universe
## 1 Introduction
In the standard picture of hierarchical structure formation, galaxies reside in dark matter haloes. The formation and evolution of the haloes is dominated by gravity, with the haloes growing by accretion and mergers. The formation of the galaxies and their relation to the dark matter haloes is more complex and depends on the detailed physical processes, leading to the varied observed galaxy properties. As the haloes merge and evolve, the haloes will often host more than one galaxy (since galaxy merger is a slower process). The evolution of the galaxies may thus be impacted by both the baryonic physics and by the merger history of their host haloes.
One of the most useful and popular ways to characterise the dark matter halo-galaxy connection is by measuring the average number of galaxies that populate haloes as a function of halo mass, which provides the basis for the halo occupation distribution (HOD) framework (Jing et al., 1998; Benson et al., 2000; Peacock and Smith, 2000; Berlind et al., 2003; Zheng et al., 2005, 2007; Guo et al., 2015). The HOD formalism has been widely used to interpret clustering data (e.g., Zehavi et al., 2011; Guo et al., 2015), to characterise different galaxy populations (Contreras et al., 2013; Yuan et al., 2022), to create mock galaxy catalogues (e.g., Grieb et al., 2016), to examine galaxy assembly bias effects (e.g., Zehavi et al., 2018; Salcedo et al., 2022) or even constrain cosmological parameters (e.g., Cacciato et al., 2013; More et al., 2015; Zhai et al., 2019; Miyatake et al., 2022; Yuan et al., 2022; Zhai et al., 2023).
The HOD model's strengths as a technique for creating mock galaxy catalogues include its ability to reproduce realistic galaxy clustering, its flexibility, and computational efficiency. Populating a dark matter simulation with galaxies, over large cosmological volumes, takes mere seconds and requires only the position and mass of dark matter haloes. Some HOD improvements, such as velocity bias (Guo et al., 2015) or assembly bias (Hearin et al., 2016; Xu et al., 2021), may also necessitate the simulation's dark matter particles or additional halo properties (see Yuan et al., 2022; Yuan et al., 2022) for the latest developments on HOD modelling). These requirements are significantly smaller than those of other techniques, such as subhalo abundance matching (SHAM, Vale and Ostriker, 2006; Conroy et al., 2006; Guo and White, 2014; Contreras et al., 2015; Kulier and Ostriker, 2015; Chaves-Montero et al., 2016; Lehmann et al., 2017; Contreras et al., 2021, 2021; Favole et al., 2022; Contreras et al., 2023, 20) or semi-analytical models of galaxy formation (SAMs, e.g., Kauffmann et al., 1993; Cole et al., 1994; Bower et al., 2006; Lagos et al., 2008; Benson, 2010, 2012; Jiang et al., 2014; Croton et al., 2016; Lagos et al., 2018; Stevens et al., 2018; Henriques et al., 2020), which require higher resolution simulations, subhalo property computation, and, in most cases, halo merger trees. In turn, these requirements are smaller than those of
hydrodynamic simulations, which are arguably the most advanced way we have today to model galaxies on cosmological volumes.
In hydrodynamic simulations (such as EAGLE, Schaye et al. 2015; Illustris,Vogelsberger et al. 2014b; Magueticum, Dolag et al. 2016; HorizonAGN, Dubois et al. 2014; Simba, Dave et al. 2019; IllustrisTNG, Nelson et al. 2018; Pillepich et al. 2019 and MillenniumTNG, Pakmor et al. 2022), dark matter particles are modelled alongside baryonic particles/cells. These simulations are then able to directly reproduce the interaction between dark matter and baryons and provide unique opportunities to study, in detail, the evolution of galaxies in a cosmological context. The downside of these simulations is their high computational cost, which can be an order of magnitude larger than dark matter-only simulations. Hence, hydrodynamic simulations and SAMs are often used to enhance the accuracy of other, more practical, approaches for modelling galaxies, such as HODs and SHAMs.
Our work follows that of Contreras et al. (2017; C17 hereafter), where we studied the evolution of the HODs of stellar mass-selected samples from two different semi-analytic models generated from the same dark matter simulation. In the SAMs, the haloes from the N-body simulations are populated with galaxies using analytical prescriptions for the baryonic processes. Following the dark matter merger trees, galaxies merge and evolve as new stars form and previous generations of stars change. The evolution of the HOD is characterised by fitting a parametric form to the HODs at different redshifts and studying the evolution of the fitting parameters. C17 present a simple evolution model for each of the HOD parameters. This evolution can be used to populate lightcones constructed from N-Body simulations (e.g., Smith et al. 2017, 2022) or for modelling clustering data at different redshifts. Although the HODs describing the two SAMs exhibit some differences, the evolution of HOD parameters followed a similar pattern. These findings may suggest that the evolution of the HOD is governed by cosmology and not galaxy formation physics.
In this paper, we explore the evolution of the HOD of stellar mass-selected samples for two distinct galaxy population models: a state-of-the-art hydrodynamical simulation, the TNG300, whose galaxy population strongly depends on the baryonic processes of the model, and a basic SHAM model without scatter. In the SHAM model, each subhalo in the simulation is assigned a stellar mass by assuming a monotonic relation to a subhalo property (\(V_{\rm peak}\) in our case), such that the subhalo with the highest value of \(V_{\rm peak}\) has the largest stellar mass and so on (see SS 2.2 for more details). Since we construct our galaxy samples based on a fixed number density, the galaxy samples produced by the SHAM model do not depend on any galaxy formation physics, but rather on the simulation's cosmology. We find that the HODs evolve nearly identically in both models, indicating that the evolution is determined by the cosmological hierarchical accretion picture and not by the galaxy formation physics. Having a universal way in which the HOD parameters evolve, independent of the galaxy formation model assumed, justifies some of the ansatzes assumed today when constructing simulated lightcone galaxy catalogues.
This paper is organised as follows. The simulations and galaxy population models used in this study are described in SS 2. The evolution of HOD in each of these models is depicted in SS 3 and subsequently analysed SS 4. We conclude in SS 5. Appendix A presents results for the evolution of the HOD in the EAGLE hydrodynamical simulation. Appendix B examines the dependence on the values of the cosmological parameters. Unless otherwise stated, the standard units in this paper are \(h^{-1}{\rm M}_{\odot}\) for masses, \(h^{-1}{\rm Mpc}\) for distances, \({\rm km/s}\) for the velocities, and all logarithm values are in base 10.
## 2 Models of Galaxy Clustering
In this section, we describe the galaxy population models employed in the construction and characterization of our galaxy samples. In SS 2.1, we introduce the TNG300 cosmological magneto-hydrodynamic simulation, as well as its dark matter-only counterpart, TNG300-Dark. In SS 2.2, we present the subhalo abundance matching method employed to populate the TNG300-Dark. In SS 2.3, we describe briefly the halo occupation distribution framework, used to characterise the different galaxy samples. Finally, in SS 2.4, we specify how we select and compare the galaxies from TNG300 and the SHAM mock.
### The TNG300
In this work, we use galaxy samples from the Illustris-TNG300 simulation (hereafter TNG300). This simulation is part of "The Next Generation" Illustris Simulation suite of magneto-hydrodynamic cosmological simulations (IllustrisTNG; Nelson et al. 2018; Springel et al. 2018a; Marinacci et al. 2018; Pillepich et al. 2018b; Naiman et al. 2018), the successor of the original Illustris simulation (Vogelsberger et al. 2014b,a; Genel et al. 2014; Sijacki et al. 2015). TNG300 is one of the largest publicly available high-resolution hydrodynamic simulations1. The simulated volume is a periodic box of \(205\,h^{-1}{\rm Mpc}\) (\(\sim 300\) Mpc) aside. The number of gas cells and dark matter particles is \(2500^{3}\) each, implying a baryonic mass resolution of \(7.44\times 10^{6}\,h^{-1}{\rm M}_{\odot}\) and a dark matter particle mass of \(3.98\times 10^{7}\,h^{-1}{\rm M}_{\odot}\). The cosmological parameters assumed in the simulations, \(\Omega_{\rm M}=0.3089\), \(\Omega_{\rm b}=0.0486\), \(\sigma_{8}=0.8159\), \(n_{\rm s}=0.9667\) and \(h=0.6774\), are consistent with recent Planck values (Planck Collaboration et al. 2016).
Footnote 1: [https://www.tmg-project.org/](https://www.tmg-project.org/)
TNG300 was run using the AREPO code (Springel 2010) and features a number of enhancements over its predecessor, the Illustris simulation, including: an updated kinetic AGN feedback model for low accretion rates (Weinberger et al. 2017); an improved parameterization of galactic winds (Pillepich et al. 2018a); and inclusion of magnetic fields based on ideal magneto-hydrodynamics (Pakmor et al. 2011; Pakmor & Springel 2013; Pakmor et al. 2014). The free parameters of the model were calibrated to ensure that the simulation agrees with a number of observations: (i) the stellar mass function, (ii) the stellar-to-halo mass relation, (iii) the total gas mass contained within the virial radius (\(r_{500}\)) of massive groups, (iv) the stellar mass - stellar size relation and the black hole-galaxy mass relation at \(z=0\), and (v) the overall shape of the cosmic star formation rate density up to \(z\sim 10\). The TNG simulations also successfully reproduce many other observables not directly employed in the calibration process (e.g., Springel et al. 2018a; Pillepich et al. 2018b; Springel et al. 2018b; Vogelsberger et al. 2020).
To identify (sub)haloes/galaxies, a friends-of-friends group finder (FOF; Davis et al. 1985) is first used to identify the haloes (with linking length 0.2), within which gravitationally bound substructures are then located and hierarchically characterised using the SUBFIND algorithm (Springel et al. 2001). The SUBFIND catalogue contains both central and satellite subhaloes, with the position of the centrals coinciding with the FOF centres (defined as the minimum of the gravitational potential).
We use as well the dark matter-only version of TNG300, which we refer to as TNG300-Dark. This simulation has the same initial conditions, volume, cosmology, number of outputs and number of dark matter particles as its hydrodynamic counterpart, but with a mass
particle of \(4.27\times 10^{7}\)\(h^{-1}\)M\({}_{\odot}\). We also utilize the merger trees of the simulation, run with the SUBLINK algorithm (Rodriguez-Gomez et al., 2015), to compute \(V_{\rm peak}\) for the subhaloes, needed for constructing the subhalo abundance matching catalogue.
### The subhalo abundance matching
SubHalo Abundance Matching (SHAM; Vale and Ostriker, 2006; Conroy et al., 2006) is an empirical method for populating the subhaloes of a \(N\)-body simulation with galaxies. In its most fundamental form, SHAM assumes a monotonic mapping between the (sub)halo mass of the central galaxies and their stellar mass or luminosity. Recent implementations of SHAM incorporate scatter and satellite galaxies by utilizing subhalo properties at infall or the maximum values throughout their assembly history. These modifications are necessary in order to obtain realistic clustering predictions.
One of the main advantages of SHAM approach is the computational efficiency and simplicity. In contrast to HOD models, which have between five to ten free parameters, standard SHAM models use a single free parameter, the scatter between the subhalo property used and the stellar mass, in the majority of their implementations. Additionally, SHAM predicts galaxy clustering in rough accordance with hydrodynamical simulations and reproduces some, but not all, of its galaxy assembly bias signal (Chaves-Montero et al., 2016; see also Contreras et al., 2021, 2022). At the same time, due to the necessity of identifying the subhalos, the resolution of the N-body simulation required to run a SHAM is greater than that needed for an HOD, which only requires a halo catalogue. Furthermore, SHAM models typically require some subhalo properties that are not always provided by the N-Body codes and need to be computed from the subhalo merger trees, such as the peak halo mass (\(M_{\rm peak}\)), the peak maximum circular velocity (\(V_{\rm peak}\)) or the infall subhalo mass (\(M_{\rm infall}\)).
In this paper, we create SHAM mocks with the subhalo property \(V_{\rm peak}\) using the TNG300-Dark simulation. \(V_{\rm peak}\) is defined as the peak value of \(V_{\rm max}\) over the subhalo's evolution, where \(V_{\rm max}\) is the maximum circular velocity (\(V_{\rm max}\equiv\max(\sqrt{GM(<r)/r})\)). \(V_{\rm peak}\) has been widely used as the SHAM primary property of our SHAM and has been shown to provide a tighter relation to the stellar mass of galaxies than other properties (see also the discussion in Campbell et al., 2018). For simplicity, we do not introduce scatter between the subhalo property and the stellar mass. We use the stellar mass function of the TNG300 to assign stellar masses to the subhaloes. As we select galaxies based on number density, and use a SHAM without scatter, the choice of the stellar mass function has no impact on the results.
We chose a SHAM without scatter created from the dark matter-only simulation since such a model is not influenced by galaxy formation physics and results purely from the input cosmology of the N-body simulation. This is in direct contrast to the case of an hydrodynamic simulation, where baryons are carefully modelled to create realistic galaxy population samples. For completeness, we also tested a SHAM model with scatter, which is in the middle of these two extremes, where the scatter is added to better mimic the properties of TNG300. However, as the results from this model were almost identical to the other two models, we chose not to include them here for the sake of clarity of presentation.
### The halo occupation distribution
#### 2.3.1 HOD modelling
The HOD formalism describes the relationship between galaxies and haloes in terms of the probability distribution that a halo of virial mass \(M_{\rm h}\) contains \(N\) galaxies of a particular type, as well as the spatial and velocity distributions of galaxies within haloes. The fundamental component is the halo occupation function, \(\langle N(M_{\rm h})\rangle\), which represents the mean number of galaxies as a function of halo mass. This approach has the advantage of not requiring assumptions about the physical processes that drive galaxy formation and can be empirically derived from observations.
Standard applications typically assume a cosmology and a parameterized form for the halo occupation functions, which are motivated by the predictions of SAMs and hydrodynamics simulations (e.g., Zheng et al., 2005). The HOD parameters are then constrained using measurements of galaxy clustering from large surveys. This method essentially converts galaxy clustering measurements into a physical relation between the galaxies and dark matter halos, paving the way for comprehensive tests of galaxy formation models.
An important application of this approach is the generation of simulated galaxy catalogues by populating dark matter haloes in N-body simulations with galaxies that reproduce the desired clustering. This method has gained popularity due to its low computational cost and high performance (e.g., Manera et al., 2015; Zheng and Guo, 2016; Yuan et al., 2022). The halo occupation function is typically provided at a specific redshift or over a narrow redshift interval. To generate a mock galaxy catalogue over a wide range of redshifts (e.g., lightcone), an HOD model with a dependence on redshift may be needed. In C17, we presented a novel approach to describe the HOD as a function of redshift. There, we studied the HOD evolution for stellar-mass selected galaxies since \(z=3\), for two different SAMs. Even though the HODs of those two models were different, the evolution of their HODs was similar. A simplified version of our model was later used by Smith et al. (2017, 2022) to populate simulated lightcones built from N-body simulations to create more realistic galaxy catalogues.
#### 2.3.2 HOD parameterization
To parameterize the HOD, it is useful to distinguish between central galaxies, i.e. the primary galaxy at the centre of the halo, and the additional satellite galaxies, and to consider the contributions of each separately (Kravtsov et al., 2004; Zheng et al., 2005). By definition, a dark matter halo cannot contain more than one central galaxy, but there is no upper limit on the number of satellites. Furthermore, for samples defined by properties that scale with halo mass, such as stellar mass or luminosity, a halo is typically populated first by a central galaxy, followed by additional satellite galaxy (although there can be haloes populated by only satellite galaxies in a given sample; see e.g., Jimenez et al., 2019).
The traditional shape for the HOD is a smooth transition from zero to one galaxy for the central galaxies and a transition from zero to a power law for the satellites. The 5-parameter model introduced by Zheng et al. (2005) (see also Zheng et al., 2007; Zehavi et al., 2011) is one of the most widely used parameterizations as it describes well samples of galaxies brighter than a given luminosity or more massive than a given stellar mass. We use this form of the halo occupation function in this work to describe the TNG300 and the SHAM mocks.
The mean occupation function of the central galaxies is described as a step-like function with a softened cutoff profile, to account for the dispersion between the stellar mass (or luminosity) and halo mass. It
has the following form:
\[\langle N_{\rm cen}(M_{\rm h})\rangle=\frac{1}{2}\left[1+{\rm erf}\left(\frac{ \log M_{\rm h}-\log M_{\rm min}}{\sigma_{\log M}}\right)\right]\,, \tag{1}\]
where \(M_{\rm h}\) is the host halo mass and \({\rm erf}(x)\) is the error function,
\[{\rm erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}{\rm d}t\,. \tag{2}\]
The parameter \(M_{\rm min}\) characterizes the minimum mass for hosting a central galaxy above a given threshold, or more specifically, the halo mass at which half the haloes are occupied by a central galaxy (i.e., \(\langle N_{\rm cen}(M_{\rm min})\rangle=0.5\)). The second parameter \(\sigma_{\rm logM}\) represents the "sharpness" (width) of the transition from zero to one galaxy per halo. The value of \(\sigma_{\rm logM}\) indicates the amount of scatter between stellar mass and halo mass.
The occupation function for satellite galaxies is modelled as:
\[\langle N_{\rm sat}(M_{\rm h})\rangle=\left(\frac{M_{\rm h}-M_{\rm cut}}{M_{1} ^{*}}\right)^{\alpha}\,, \tag{3}\]
with \(M_{\rm h}>M_{\rm cut}\), representing a power-law shape with a smooth cutoff at low halo masses. In this equation, \(\alpha\) is the slope of the power law, which is typically close to one, \(M_{\rm cut}\) is the satellite cutoff mass scale (i.e., the minimum mass of haloes hosting satellites), and \(M_{1}^{*}\) is the normalisation of the power law. A related parameter, \(M_{1}=M_{\rm cut}+M_{1}^{*}\), is often used to characterise the halo mass scale for hosting satellite galaxies above a given threshold. Specifically, it measures the average halo mass for hosting one satellite galaxy(i.e., \(\langle N_{\rm sat}(M_{1})\rangle=1\)). In what follows, for clarity, we show results for \(M_{1}\). Nonetheless \(M_{1}^{*}>>M_{\rm cut}\), so \(M_{1}\sim M_{1}^{*}\). We have verified that all trends identified for \(M_{1}\) also hold for \(M_{1}^{*}\).
The occupation functions for centrals and satellites can be fitted independently with this definition, with the total number of galaxies given by their sum:
\[\langle N_{\rm gal}(M_{\rm h})\rangle=\langle N_{\rm cen}(M_{\rm h})\rangle+ \langle N_{\rm sat}(M_{\rm h})\rangle. \tag{4}\]
Figure 1 depicts a schematic representation of the shape of the HOD illustrating which features are sensitive to these five parameters.
We note that often a variant of these expressions is used, such that the cutoff profile for the central galaxies occupation is applied also to the satellite occupation, assuming (using our notation) that the total number of galaxies is given by \(\langle N_{\rm cen}\rangle(1+\langle N_{\rm sat}\rangle)\). In that case, the fitting of the HOD cannot be done separately for centrals and satellites (because of the \(\langle N_{\rm cen}\rangle\langle N_{\rm sat}\rangle\) term). Hence, assuming this form results in a more complex procedure to determine the best-fitting values of the parameters and ultimately gives poorer constraints, particularly for \(M_{\rm cut}\). Furthermore, Jimenez et al. (2019) show that satellite galaxies from a stellar mass-selected sample sometimes populate haloes whose central galaxies are not part of that sample. Assuming this form doesn't allow to account for such cases, and thus might bias the results. For these reasons, we choose to proceed with the formalism as detailed above in Eq. 2-4. We caution that one must be careful when comparing results obtained with different notations.
### Galaxy samples
We study stellar-mass selected galaxy samples corresponding to four different number densities and seven different redshifts. To construct these samples, we choose, at each epoch, the most massive galaxies corresponding to the following number densities: 0.0316, 0.01,
Figure 1: A schematic depiction of the standard 5-parameter form of the halo occupation function, which gives the mean number of galaxies per halo as a function of the host halo mass (based on Fig. 1 of C17). The solid blue line represents the occupation function for all galaxies, which can be further separated into the contributions of central galaxies (red dashed line) and satellite galaxies (red dotted line). As a reference we show an abundance of \(\langle N_{\rm gal}(M_{h})\rangle=1\) as a horizontal grey dotted line; this will be shown in all subsequent HOD plots. The halo occupation function of central galaxies exhibits a gradual transition from zero to one galaxy per halo, which is well described by two parameters: \(M_{\rm min}\), the halo mass at which half of the haloes are populated by a central galaxy, and \(\sigma_{\rm logM}\), which characterises the smoothness of this transition. The satellites occupation function is described by a transition from zero galaxies to a power law with three parameters: \(M_{\rm cut}\), the minimum halo mass for hosting satellites, \(M_{1}\), the mass at which there is, on average, one satellite galaxy per halo, and the power-law slope \(\alpha\). See text for more details and the explicit equations.
Figure 2: Cumulative stellar mass functions for galaxies in the TNG300 simulation. The coloured lines represent different redshifts as labelled. The dashed horizontal lines denote the number density samples adopted in this work (the values are marked at the upper right of each line). The galaxies selected for a given number density and redshift are those to the right of the intersection with their associated dashed line.
0.00316 and 0.001 \(h^{3}\)Mpc\({}^{-3}\). At \(z=0\), these correspond to stellar mass thresholds of \(6.05\times 10^{8}\), \(6.47\times 10^{9}\), \(2.19\times 10^{10}\) and \(4.54\times 10^{10}\)\(h^{-1}\)M\({}_{\odot}\), respectively. The stellar mass of a galaxy in the TNG300 is defined as the sum of the masses of all stellar particles within twice the half-mass radius. We remind the reader that, since we are using the same stellar mass function for both the hydrodynamical and SHAM models, they will share the same cut for each number density.
Fig. 2 shows the cumulative stellar mass functions for the 7 redshifts used in this work, \(z=0,\ 0.5,\ 1.0,\ 1.5,\ 2.0,\ 2.5\ \&\ 3.0\). The horizontal dashed lines correspond to the four different number densities. The galaxies included in each sample are the ones to the right (more massive) of the intersection of these horizontal lines and the cumulative stellar mass functions. We chose these cuts to facilitate the comparison with C17, where we analyzed galaxies from semi-analytical models selected in a similar fashion.
In order to facilitate the comparison of the HOD evolution for the different models, it is also necessary to correct the halo mass function of TNG300. Due to baryonic effects, The TNG300's halo mass function is not identical to that of TNG300-Dark, on which the SHAM mock was run. The cumulative halo mass functions for these two simulations are shown in Fig. 3, for the different redshifts. To facilitate the comparison, we recalibrate the halo mass function of TNG300 to match that of the dark matter-only simulation. This is done by measuring the difference in halo mass between the simulations for each cumulative abundance, and then applying this difference to the TNG300's haloes. This step is particularly helpful for interpreting the evolution of the HOD parameters that represent masses (such as \(M_{\rm min}\), \(M_{1}\), and \(M_{\rm cut}\)), given that the differences between the halo mass functions are not constant with redshift. All TNG300 results presented in this paper incorporate this correction.
Figure 4: The HODs in the TNG300 simulation (top panel) and for a mock galaxy sample run with a SHAM model (middle panel), for stellar-mass selected galaxies corresponding to a number density of 0.0316 h\({}^{3}\)Mpc\({}^{-3}\). The different lines correspond to different redshifts spanning z = 0 to 3, as labelled. To facilitate the comparison between the models, we show in the bottom panel the HODs for both the TNG300 and the SHAM mock at z = 0, 1, 2 and 3.
Figure 3: Cumulative halo mass functions for TNG300 (solid lines) and TNG300-Dark (dashed lines). The coloured lines correspond to different redshifts as labelled. Halo mass is defined as the total mass enclosed in a sphere whose mean density is 200 times the critical density of the Universe (also known as M\({}_{\rm 200,crit}\)). To compare the two samples, we calibrate the halo masses of the TNG300 by matching the halo mass functions (see § 2.4 for more details).
## 3 The evolution of the HOD
We compute the halo occupation functions in the TNG300 simulation and for the SHAM model for the four number density samples previously stated and at the seven redshift outputs between z=0 and z=3. Please note that we are here directly measuring the average halo occupation as a function of halo mass, as opposed to inferring it from the correlation function, as is typical in galaxy clustering studies.
In Fig. 4, we show the HODs for the galaxy samples with a number density of \(n=0.0316\)\(h^{3}\)Mpc\({}^{-3}\) at the seven redshift outputs between \(z=0\) and \(z=3\). The top and middle panels show the HODs for the TNG300 and the SHAM model, respectively, while the bottom panel compares the HODs of both models for a smaller set of redshifts. The evolution of the HOD in both models appears similar. The overall trend is a shift of the halo occupation function toward larger halo masses with decreasing redshift (increase in time). In more detail, for both models, the threshold for hosting central galaxies (at the lower occupation region at low halo masses), increases monotonically with time from \(z=3\) to \(z\sim 1\). From that point until \(z=0\), the shift in halo mass diminishes with the threshold for starting to host galaxies remaining similar. In contrast, the satellite occupation appears to continuously shift with decreasing redshift. These results are consistent with the findings of C17 in semi-analytic galaxy formation models.
To gain a better understanding of the evolution of HODs, we fit the halo occupation functions using the 5-parameter functional form described in SS 2.3.2 and analyse the evolution of those parameters. We fit the central and satellite occupations independently, and assume a constant error per bin for the fitting. In previous works (e.g., Contreras et al., 2013, C17), we tested using different weights on each of the points of the HOD (such as weighting by the abundance of haloes or the effective bias in each mass bin), finding that this tends to overemphasize a specific part of the HOD, resulting in significant discrepancies at high halo masses. We estimate the error on the HOD fitting parameters by normalizing the constant bin errors such that the best fit has a reduced chi-square of one (i.e., \(\chi^{2}_{\rm min}\)/d.o.f = 1).
Fig. 5 presents the values for the best fitting parameter of the HOD, \(M_{\rm min}\), \(M_{1}\), \(\sigma_{\rm logM}\), \(\alpha\) and \(M_{\rm cut}\), as a function of redshift. The solid lines show the values for the TNG300 while the SHAM predictions are shown as dashed lines. The different colours represent different number densities as labelled. We do not show the prediction for the lowest number density for the satellite HOD parameters, since the number of haloes with satellite galaxies at high redshift was too low to do a proper fit.
While there are some differences between the values of the parameters for the TNG300 and the SHAM at different redshifts, overall there is a good agreement between the models for all but one parameter, \(\sigma_{\rm logM}\). While the SHAM technique is known for being able to reproduce the galaxy clustering (and therefore, the HOD) of complex galaxy samples as a hydrodynamic simulation (e.g., Chaves-Montero et al., 2016; Contreras et al., 2021c), it is surprising that even in its most basic form (without scatter), the model is in good agreement with the TNG300 predictions. We remind the reader that the SHAM model does not incorporate any baryonic processes and that the properties of the resulting galaxy population depend solely on the gravitational growth of the subhaloes in the simulation. This in turn depends on the cosmological model corresponding to the dark matter-only simulation used.
These four parameters evolve as follows:
* The characteristic halo mass for hosting a central galaxy, \(M_{\rm min}\)-increases linearly (in logarithmic scale) from \(z=3\) to \(z\sim 1-0.5\) and then remains constant until \(z=0\).
Figure 5: Redshift evolution of the 5 fitting parameters of the HOD, corresponding to the TNG300 simulation (solid lines) and the SHAM mock (dashed lines). From top to bottom, the properties shown in each panel are \(M_{\rm min}\), \(M_{1}\), \(\sigma_{\rm logM}\), \(\alpha\) and \(M_{\rm cut}\). Different colours represent different number density samples, as labelled. For the lowest number density, \(n=0.001\)\(h^{3}\)Mpc\({}^{-3}\), we only show the parameters corresponding to the centrals occupation (\(M_{\rm min}\) and \(\sigma_{\rm logM}\)), since the constraints on the satellites occupation parameters are poor at higher redshifts (due to the limited amount of haloes with satellite galaxies). Error bars represent the standard deviation from the fitted parameter value.
* The characteristic halo mass for hosting a satellite galaxy, \(M_{1}\), increases linearly (in logarithmic scale) from \(z=3\) to \(z=0\).
* The power-law slope of the satellites occupation, \(\alpha\), is roughly constant from \(z=3\) to \(z\sim 1-0.5\), and then increases linearly until \(z=0\). There are some differences in the behaviour of \(\alpha\) in the TNG300 and the SHAM, which are however not that significant given the level of noise in this parameter.
* The satellites cutoff mass scale, namely the minimum mass of haloes hosting satellites, \(M_{\rm cut}\), increases linearly (in logarithmic scale) from \(z=3\) to \(z\sim 1-0.5\), and then stays constant until \(z=0\) (the same as \(M_{\rm min}\)).
Again, for these HOD parameters, the halo occupations in the TNG300 hydrodynamic simulation and in the SHAM implementation exhibit the same evolution trends. This is the main result of this work, indicating that the evolution of these parameters is the same independent of the galaxy formation physics. These trends agree with those found by C17 studying the evolution of HOD parameters in two distinct SAMs.
To further assess the robustness of our results, we also examine the evolution of these parameters in the EAGLE hydrodynamical simulation (Schaye et al., 2015; Crain et al., 2015), as presented in Appendix A. EAGLE has a smaller volume but a higher resolution than the TNG300, and it was executed with an SPH code rather than an adaptive mesh code like the one used in the TNG300. We find the same evolutionary trends as the ones observed for the TNG300, the SHAM model, and the SAMs. We have additionally studied at the evolution of the HOD in TNG300 when selecting galaxies by \(r\)-band magnitude instead of stellar mass, again finding similar evolutionary trends. We refer the reader to SS 3.4 of C17 for a more in-depth analysis of the evolution of these parameters and a simple parameterization of the evolution of the HOD parameters that can be used in the construction of galaxy samples or the interpretation of clustering data at different redshifts.
As for \(\sigma_{\rm logM}\), this property measures the scatter between the halo mass and stellar mass of a galaxy sample. The prediction of a SHAM without scatter will only measure the dispersion between the halo mass and \(V_{\rm peak}\), which is significantly smaller than the expected scatter between stellar mass and halo mass of a fully physical galaxy formation model. As concluded from previous work (e.g., C17), this parameter should be the one that best captures the physics of galaxy formation of a galaxy sample. However, as demonstrated for example in Zehavi et al. (2011), this parameter, along with \(M_{\rm cut}\), have the weakest constraint from galaxy clustering. Hence, it is not required to model \(\sigma_{\rm logM}\) perfectly when creating mocks that attempt to reproduce realistic galaxy clustering. Nonetheless, the values appear relatively constant with redshift, which makes sense given that we do not anticipate a significant change in the evolution of the stellar mass-halo mass relationship. This is one of the foundations of the SHAM model (see Contreras et al., 2015 for further discussion).
## 4 Origin of the HOD evolution
In SS 3 we studied the evolution of the HOD in the TNG300 hydrodynamical simulation and in a SHAM applied to the dark matter-only simulation, finding that the evolution of the HOD parameters is largely the same. Since no galaxy formation physics is included in our SHAM implementation and it lacks any free parameter that attempts to reproduce the impact of baryonic physics (such as a scatter in the \(V_{\rm peak}\)-stellar mass relation, modifying the dynamical friction model, etc.), it appears that the evolution is independent of galaxy formation physics. This is further corroborated by the overall agreement with results from the EAGLE hydrodynamical simulation (Appendix A) and SAMs applied to the Millennium Simulation (C17). This leads us to conclude that the evolution of the HOD is instead dominated by the cosmological model and the hierarchical growth of structure.
We would still like to discern which aspect of the cosmological picture shapes the evolution of the HOD parameters. One possibility is that, at least for the parameters that represent halo masses (such as \(M_{\rm min}\), \(M_{1}\), and \(M_{\rm cut}\)), the evolution arises from the typical growth of haloes. To determine this, we examined the evolution of these parameters as peak height (\(\nu(M,z)=\delta(M)/\delta(z)\)) values rather than halo masses (not shown here). We found that the changes in peak height were greater than when expressing these parameters in mass units, indicating that the evolution is not (solely) due to how structures grow.
Another factor that can potentially influence how HODs evolve is the values of the cosmological parameters. This is a plausible explanation of the agreement since the TNG300-Dark simulation (on which we run the SHAM mock) and the TNG300 simulation share the same cosmology. Moreover, the growth of dark matter haloes is affected by cosmology. A strong cosmological dependence on the evolution of HODs with any cosmological parameter could imply that we could constrain cosmology based on the HOD evolution we infer from galaxy surveys. However, when examining the evolution of the HOD in SHAM mocks built on simulations with different cosmologies, we find only small changes in the evolution of the parameters. The details of this analysis are presented in Appendix B for eight cosmological parameters. This indicates that the specific flavour of the cosmological model also does not influence much the evolution of the HOD.
Since the details of the cosmological model do not have a significant impact on how the HOD evolves, we deduce that this evolution is governed by the hierarchical way in which haloes and subhalos (and therefore galaxies) form and evolve in the \(\Lambda\)CDM model. This becomes more apparent when we examine the evolution of the ratio of the two halo mass parameters \(M_{1}\) and \(M_{\rm min}\), which is frequently
Figure 6: Redshift evolution of the ratio of the two characteristic halo mass parameters of the HOD, \(M_{1}\) and \(M_{\rm min}\). The predictions from the TNG300 simulation are shown as solid lines, while the dashed lines denote the results from the SHAM mock. The different colours represent different number densities as labelled.
employed to characterise a galaxy population (e.g., Zehavi et al., 2011; Coupon et al., 2012; Guo & White, 2014; Skibba et al., 2015). This ratio represents the mass range over which a halo hosts only a central galaxy from the sample before hosting additional satellite galaxies, giving rise to the "plateau" in the halo occupation function (Fig. 1; see also Watson et al., 2011). Fig. 6 shows the evolution of this ratio for the TNG300 and our SHAM model, where the value of \(M_{1}/M_{\rm min}\) roughly plateaus at high redshift and then increases with time, toward the present.
C17 explored the change in this ratio when assuming alternative "non-hierarchical" evolution models for the galaxies. The models tested were a passive evolution model (e.g., Seo et al., 2008), where galaxies no longer form or merge, a tracking evolution, in which galaxies no longer form but continue to merge, and a descendant clustering selection (Padilla et al., 2010) where galaxies are selected based on the evolution of halo clustering (see Fig. 11 in C17 and discussion thereof). All these models show significantly different evolution for \(M_{1}/M_{\rm min}\), with the ratio decreasing toward lower redshifts, in contrast to the evolution found in our SHAM mocks and the TNG300. The Guo et al. (2013) SAM used in C17 also exhibits the same type of evolution as the models presented in this work. 2
Footnote 2: We note that the Gonzalez-Perez et al. (2014) SAM additionally used in C17 showed some variation in the evolution of \(M_{1}/M_{\rm min}\). This is likely due to the different (sub)halo algorithms employed to Guo et al. (2013), TNG300 and TNG300-Dark, which use the FOF and SUBFIND algorithms (see § 2.1 for more details).
We conclude that the evolution of the HOD is independent of galaxy formation physics, or the specifics of the cosmological model. Any galaxy population that grows hierarchically, such as stellar mass selected galaxies, in a \(\Lambda\)CDM (or similar) framework should exhibit similar evolutionary trends to the ones found in this work.
## 5 Conclusion
In this paper, we look at the evolution of the HOD of stellar mass-selected galaxies from two different models; a magneto-hydrodynamic simulation, the TNG300, and a SHAM mock built from the dark matter-only simulation without any baryonic physics implemented. We characterise the cosmic evolution by fitting the HODs at different redshifts with the standard 5-parameter parametric form (Zheng et al., 2005). Our main findings are as follows:
* The HODs for the TNG300 and the SHAM models are similar at all redshifts and number densities, exhibiting a similar evolution of the halo mass parameters. The one standout is \(\sigma_{\rm logM}\), capturing the width of the transition from zero to one galaxy per halo, which varies between the models.
* The values of \(\sigma_{\rm logM}\) are different for the TNG300 and the SHAM model. This parameter is related to the scatter between halo mass and stellar mass and expected to be dependent on the galaxy formation physics model. At the same time, this parameter has little effect on galaxy clustering and thus it is not always essential to define its value or its evolution with high precision.
* The evolution of the HOD is also similar to that measured in the EAGLE hydrodynamical simulation, and for a M\({}_{\rm r}\) magnitude limited sample in the TNG300 simulation. The evolution of the parameters (other than \(\sigma_{\rm logM}\)) is also similar to that of semi-analytical models of galaxy formation, as explored in C17.
* The evolution of the HOD is largely insensitive to variations of the cosmological parameters, with only \(\sigma_{\rm S}\) and \(w_{0}\) somewhat impacting the shape.
* The values and evolution of the \(M_{1}/M_{\rm min}\) ratio are similar for the TNG300 and the SHAM model. They are also in agreement with the ones found by C17 when analysing a SAM with the same (sub)halo identification algorithm, but different to that found when assuming alternative galaxy evolution models (such as passive evolution).
Based on these results, it appears that the physics of galaxy formation has little impact on the evolution of the HOD for stellar mass-selected samples. Given that the HOD and galaxy clustering of a SHAM model without scatter or any free parameter only depend on the cosmological model assumed in the dark matter simulation on which it is based, we can conclude that the cosmological framework dominates the HOD evolution for this type of galaxies. By cosmological framework here we specifically refer to the hierarchical building of haloes and galaxies, as we have also demonstrated that the values of the cosmological parameters have little impact on the HOD evolution.
The way the HOD parameters evolve in the SHAM model is a strong indication of how consistent and good the model is when populating galaxies at different redshifts, and the potential it has for creating mock galaxy catalogues (given sufficient resolution to follow the subhaloes). Furthermore, our results provide an important simplification to the process of creating mock galaxy catalogues over a large redshift range. They lend significant support for some of the ansatzes accepted today when generating mock galaxy catalogues on simulated lightcones, namely that the HOD evolution model is robust and needn't change based on the assumed galaxy formation model. This robustness, in turn, can facilitate the HOD interpretation of clustering measurements at different redshifts from upcoming large galaxy surveys.
We clarify that the results presented here and subsequent conclusions have only been investigated for galaxy samples selected by stellar mass (and luminosity), that grow hierarchically. The HOD of galaxies selected, for example, by star formation rate may not follow the same pattern. We note that the extension of our work to that is not trivial, as it requires a somewhat more complex HOD model as well as a non-trivial extension of the SHAM methodology (S. Ortega Martinez, in prep.), and we reserve this to future work.
## Data Availability
The IllustrisTNG simulations, including TNG300, are publicly available and accessible at www.tng-project.org/data(Nelson et al., 2019). The data underlying this article will be shared on reasonable request to the corresponding author.
## Acknowledgements
We thank Nelson Padilla, Celeste Artale, Carlton Baugh, Peder Norberg, Shaun Cole and Alex Smith for useful comments and discussions. We acknowledge the hospitality of the ICC at Durham University and the helpful conversations with many of its members. SC acknowledges the support of the "Juan de la Cierva Incorporacion" fellowship (IJC2020-045705-I). IZ was partially supported by a CWRU ACES+ Opportunity Grant. The authors also acknowledge the computer resources at MareNostrum and the technical support provided by Barcelona Supercomputing Center (RES-AECT-2020-3-0014) | ```
TNG300 magneto-hydrodynamicシミュレーションとSHAMを用いたmockカタログを用いて、ハロのoccuppation分布の redshift演化の起源を研究します。私たちは、 redshift範囲 0 ≤ z ≤ 3 にわたる固有の質量選択銀河サンプルを分析し、そのハロoccuppation functionを測定しました。これらのハロoccuppation functionを HOD パラメータにフィットさせ、宇宙の時間経過における進化を研究しました。TNG300 の銀河系集団は、シミュレーションに実装されたbaryonic physics に強く依存しています。対照的に、散乱がない基本的な SHAM モデルによって予測される銀河系集団は、ダークマターシミュレーションの kosmology に直接帰属しています。両モデルの HOD 進化は類似しており、半解析的モデルにおける HOD 進化の研究と一致しています。特に、中心銀河とsatellit銀河の宿主質 |
2309.16816 | PROSE: Predicting Operators and Symbolic Expressions using Multimodal
Transformers | "Approximating nonlinear differential equations using a neural network\nprovides a robust and effici(...TRUNCATED) | Yuxuan Liu, Zecheng Zhang, Hayden Schaeffer | 2023-09-28T19:46:07 | http://arxiv.org/abs/2309.16816v1 | "# PROSE: Predicting Operators and Symbolic Expressions using Multimodal Transformers\n\n###### Abst(...TRUNCATED) | "線形近似しない微分方程式をニューラルネットワークを用いて近似するこ(...TRUNCATED) |
2309.09936 | A Concise Overview of Safety Aspects in Human-Robot Interaction | "As of today, robots exhibit impressive agility but also pose potential\nhazards to humans using/col(...TRUNCATED) | "Mazin Hamad, Simone Nertinger, Robin J. Kirschner, Luis Figueredo, Abdeldjallil Naceri, Sami Haddad(...TRUNCATED) | 2023-09-18T16:52:48 | http://arxiv.org/abs/2309.09936v1 | "# A Concise Overview of Safety Aspects\n\n###### Abstract\n\nAs of today, robots exhibit impressive(...TRUNCATED) | "robots は今日の時点で素晴らしい運動能力を発揮していますが、人間と協(...TRUNCATED) |
2310.08593 | Data-driven methods for diffusivity prediction in nuclear fuels | "The growth rate of structural defects in nuclear fuels under irradiation is\nintrinsically related (...TRUNCATED) | "Galen T. Craven, Renai Chen, Michael W. D. Cooper, Christopher Matthews, Jason Rizk, Walter Malone,(...TRUNCATED) | 2023-09-07T16:28:50 | http://arxiv.org/abs/2310.08593v1 | "# Data-driven methods for diffusivity prediction in nuclear fuels\n\n###### Abstract\n\nThe growth (...TRUNCATED) | "原子燃料の照射による構造欠陥の成長率は、欠陥の拡散速度に内在的に関(...TRUNCATED) |
2310.20579 | Initialization Matters: Privacy-Utility Analysis of Overparameterized
Neural Networks | "We analytically investigate how over-parameterization of models in randomized\nmachine learning alg(...TRUNCATED) | Jiayuan Ye, Zhenyu Zhu, Fanghui Liu, Reza Shokri, Volkan Cevher | 2023-10-31T16:13:22 | http://arxiv.org/abs/2310.20579v1 | "# Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks\n\n###### A(...TRUNCATED) | "モデルのオーバーパラメータ化がランダム化された機械学習アルゴリズム(...TRUNCATED) |
2309.14967 | A novel approach for holographic 3D content generation without depth map | "In preparation for observing holographic 3D content, acquiring a set of RGB\ncolor and depth map im(...TRUNCATED) | Hakdong Kim, Minkyu Jee, Yurim Lee, Kyudam Choi, MinSung Yoon, Cheongwon Kim | 2023-09-26T14:37:31 | http://arxiv.org/abs/2309.14967v1 | "# A Novel Approach for Holographic 3D Content Generation Without Depth Map\n\n###### Abstract\n\nIn(...TRUNCATED) | "ホログラフィック3Dコンテンツを観察するために準備するにあたって、シ(...TRUNCATED) |
2301.07687 | Maybe, Maybe Not: A Survey on Uncertainty in Visualization | "Understanding and evaluating uncertainty play a key role in decision-making.\nWhen a viewer studies(...TRUNCATED) | Krisha Mehta | 2022-12-14T00:07:06 | http://arxiv.org/abs/2301.07687v1 | "# Maybe, Maybe Not: A Survey on Uncertainty in Visualization\n\n###### Abstract\n\nUnderstanding an(...TRUNCATED) | "## 不確実性を理解して評価することは、意思決定にとって重要な役割を(...TRUNCATED) |
End of preview. Expand
in Dataset Viewer.
This dataset is a translation of the abstract
column of the neuralwork/arxiver
into Japanese using the google/gemma-2-2b-it
model.
The license follows the original dataset's CC BY-NC-SA 4.0 License.
The translation was performed using text2dataset.
- Downloads last month
- 102